Full text of "Nuclear Medicine, Diagnostic Tomography and Imaging (03/02/2011)"

Nuclear Medicine, Diagnostic 
Tomography and Imaging: 

X-ray, CAT, Scanning, PET, FFT Imaging 
and Microscopy 

PDF generated using the open source mwlib toolkit.
PDF generated at: Sat, 30 Oct 2010 10:30:20 UTC



Book's Front Cover 1 

Early Medical Diagnostics using Nuclear Medicine 2 

Nuclear medicine 2 

Radiobiology 12 

Radiopharmacology 14 

Medical diagnosis 24 

Medical Imaging 28 

Medical Imaging Techniques 28 

Radiology 35 

Molecular imaging 41 

Tomography 46 

X-ray computed tomography 49 

Angiography 64 

Coronary catheterization 67 

PET Scanning 72 

Alzheimer's disease 82 

2D-FT NMR and Nuclear Magnetic Resonance Imaging 108 

Cancer screening 113 

Ultrasound Imaging 115 

Medical ultrasonography 126 

Advanced Experimental Techniques and Methods 137 

Optical Tomography and Imaging 137 

Breast cancer screening 139 

Fourier transform spectroscopy 145 

FT-Near Infrared Spectroscopy and Imaging 150 

Chemical imaging 154 

Hyperspectral imaging 161 

Multi-spectral image 166 

Fluorescence Imaging 168 

Fluorescence correlation spectroscopy 172 

Fluorescence cross-correlation spectroscopy 181 

Forster resonance energy transfer 182 

X-ray microscope 187 

Electron Microscope 189 

Atomic force microscope 196 

Neutron scattering 203 

ISIS neutron source 205 

Synchrotron 207 

Medical Diagnostics in Cardiology 213 

Cardiology 213 

Electrical conduction system of the heart 218 

Coronary disease 222 

Atherosclerosis 224 

Interventional cardiology 239 

Non-invasive Cardiology Techniques and Methods 241 

SQUID 241 

Diabetes type 2 245 

Diabetes mellitus type 2 245 

Complex Systems Biology, Genetic Screening and Biostatistics 260 

Complex Systems Biology 260 

Complexity 267 

Complex adaptive system 273 

Computational biology 277 

Biostatistics 278 

Bioinformatics 281 

Genomics 288 

Computational genomics 291 

Proteomics 293 

Interactomics 299 

Biochemistry 302 

Quantum biochemistry 311 


Article Sources and Contributors 316 

Image Sources, Licenses and Contributors 322 

Article Licenses 

License 326 

Book's Front Cover 

High resolution PET Scanner 

Early Medical Diagnostics using Nuclear Medicine

Nuclear medicine 

A whole body PET/CT Fusion image 

Nuclear medicine is a branch or specialty 
of medicine and medical imaging that uses 
radionuclides and relies on the process of 
radioactive decay in the diagnosis and 
treatment of disease. 

In nuclear medicine procedures, 
radionuclides are combined with other 
chemical compounds or pharmaceuticals to 
form radiopharmaceuticals. These 

radiopharmaceuticals, once administered to 
the patient, can localize to specific organs or 
cellular receptors. This property of 
radiopharmaceuticals allows nuclear 
medicine the ability to image the extent of a 
disease-process in the body, based on the 
cellular function and physiology, rather than 
relying on physical changes in the tissue 
anatomy. In some diseases nuclear medicine 
studies can identify medical problems at an 
earlier stage than other diagnostic tests. 

Treatment of disease, based on metabolism 
or uptake or binding of a ligand, may also be 
accomplished, similar to other areas of 
pharmacology. However, 

radiopharmaceuticals rely on the 
tissue-destructive power of short-range 
ionizing radiation. 

Description of the field 

In nuclear medicine imaging, radiopharmaceuticals are taken internally, for example intravenously or orally. Then, external detectors (gamma cameras) capture and form images from the radiation emitted by the radiopharmaceuticals. This process is unlike a diagnostic X-ray, where external radiation is passed through the body to form an image.

Normal whole body PET/CT scan with FDG-18. The whole body PET/CT scan is commonly used in the detection, staging and follow-up of various cancers.

A nuclear medicine whole body bone scan. The nuclear medicine whole body bone scan is generally used in evaluations of various bone-related pathology, such as bone pain, stress fracture, nonmalignant bone lesions, bone infections, or the spread of cancer to the bone.

There are several techniques of diagnostic nuclear medicine. 
Scintigraphy ("scint") is the use of internal radionuclides to create 
two-dimensional images. SPECT is a 3D tomographic technique that 
uses gamma camera data from many projections and can be 
reconstructed in different planes. Positron emission tomography (PET) 
uses coincidence detection to image functional processes. 

Nuclear medicine tests differ from most other imaging modalities in 
that diagnostic tests primarily show the physiological function of the 
system being investigated as opposed to traditional anatomical imaging 
such as CT or MRI. Nuclear medicine imaging studies are generally 
more organ or tissue specific (e.g.: lung scan, heart scan, bone scan, 
brain scan, etc.) than those in conventional radiology imaging, which 
focus on a particular section of the body (e.g.: chest X-ray, 
abdomen/pelvis CT scan, head CT scan, etc.). In addition, there are 
nuclear medicine studies that allow imaging of the whole body based 
on certain cellular receptors or functions. Examples are whole body 
PET scan or PET/CT scans, gallium scans, indium white blood cell 
scans, MIBG and octreotide scans. 

While the ability of nuclear medicine to image disease processes from differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow, and thus show metabolism. Also, contrast-enhancement techniques in both CT and MRI can show regions of tissue that are handling pharmaceuticals differently, due to an inflammatory process.

Diagnostic tests in nuclear medicine exploit the way that the body handles substances differently when there is disease or pathology present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer. In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene diphosphonate (MDP) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as that due to a fracture in the bone, will usually mean increased concentration of the tracer. This often results in the appearance of a 'hot spot', which is a focal increase in radio-accumulation, or a general increase in radio-accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, resulting in the appearance of a 'cold spot'. Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes.

Nuclear medicine myocardial perfusion scan with Thallium-201 for the rest images (bottom rows) and Tc-Sestamibi for the stress images (top rows). The nuclear medicine myocardial perfusion scan plays a pivotal role in the noninvasive evaluation of coronary artery disease. The study not only identifies patients with coronary artery disease, it also provides overall prognostic information, or overall risk of adverse cardiac events, for the patient.

Hybrid scanning techniques 

In some centers, the nuclear medicine scans can be superimposed, 
using software or hybrid cameras, on images from modalities such as 
CT or MRI to highlight the part of the body in which the 
radiopharmaceutical is concentrated. This practice is often referred to 
as image fusion or co-registration, for example SPECT/CT and 
PET/CT. The fusion imaging technique in nuclear medicine provides 
information about the anatomy and function, which would otherwise 
be unavailable, or would require a more invasive procedure or surgery. 

Practical concerns in nuclear imaging 

The amount of radiation from diagnostic nuclear medicine procedures 
is kept within a safe limit and follows the "ALARA" (As Low As 
Reasonably Achievable) principle. The radiation dose from nuclear 
medicine imaging varies greatly depending on the type of study. The 
effective radiation dose can be lower than or comparable to the annual 
background radiation dose. It can also be in the range or higher than 
the radiation dose from an abdomen/pelvis CT scan. [2] 

Some nuclear medicine procedures require special patient preparation 
before the study to obtain the most accurate result. Pre-imaging 
preparations may include dietary preparation or the withholding of 
certain medications. Patients are encouraged to consult with the 
nuclear medicine department prior to a scan. 

Nuclear medicine therapy 

In nuclear medicine therapy, the radiation treatment dose is administered internally (e.g. via intravenous or oral routes) rather than from an external radiation source.

The radiopharmaceuticals used in Nuclear Medicine therapy emit 
ionizing radiation that travels only a short distance, thereby minimizing 
unwanted side effects and damage to noninvolved organs or nearby 
structures. Most Nuclear Medicine therapies can be performed as 
outpatient procedures since there are few side effects from the 
treatment and the radiation exposure to the general public can be kept 
within a safe limit. Common Nuclear Medicine therapies include I-131 sodium iodide for hyperthyroidism and thyroid cancer, Yttrium-90 ibritumomab tiuxetan (Zevalin) and Iodine-131 tositumomab (Bexxar) for refractory lymphoma, I-131 MIBG (metaiodobenzylguanidine) for neuroendocrine tumors, and palliative bone pain treatment with Samarium-153 or Strontium-89. In some centers the nuclear medicine department may also use implanted capsules of isotopes (brachytherapy) to treat cancer.

A nuclear medicine parathyroid scan demonstrates a parathyroid adenoma adjacent to the left inferior pole of the thyroid gland. The above study was performed with Technetium-Sestamibi (1st column) and Iodine-123 (2nd column) simultaneous imaging and the subtraction technique (3rd column).


Normal hepatobiliary scan (HIDA scan). The nuclear medicine hepatobiliary scan is clinically useful in the detection of gallbladder disease.



Normal pulmonary ventilation and perfusion (V/Q) scan. The nuclear medicine V/Q scan is useful in the evaluation of pulmonary embolism.

Most nuclear medicine therapies will also require appropriate patient 
preparation prior to a treatment. Therefore, consultation with the 
Nuclear Medicine department is recommended prior to therapy. 

Molecular medicine 

In the future, nuclear medicine may be known as molecular medicine. 
As our understanding of biological processes in the cells of living 
organisms expands, specific probes can be developed to allow 
visualization, characterization, and quantification of biologic processes 
at the cellular and subcellular levels. Nuclear Medicine is an ideal 
specialty to adapt to the new discipline of molecular medicine, because 
of its emphasis on function and its utilization of imaging agents that 
are specific for a particular disease process. 



Thyroid scan with Iodine-123 for evaluation of 
Abnormal whole body PET/CT scan with multiple metastases from a cancer. The whole body PET/CT scan has become an important tool in the evaluation of cancer.

A nuclear medicine SPECT liver scan with technetium-99m labeled autologous red blood cells. A focus of high uptake (arrow) in the liver is consistent with a hemangioma.

Iodine-123 whole body scan for thyroid cancer evaluation. The study above was performed after total thyroidectomy and TSH stimulation with thyroid hormone medication withdrawal. The study shows a small residual thyroid tissue in the neck and a mediastinal lesion, consistent with metastatic thyroid cancer. The uptake in the stomach and bowel is normal physiologic activity.


The history of nuclear medicine is rich with contributions from gifted scientists across different disciplines in 
physics, chemistry, engineering, and medicine. The multidisciplinary nature of Nuclear Medicine makes it difficult 
for medical historians to determine the birthdate of Nuclear Medicine. This can probably best be placed between the discovery of artificial radioactivity in 1934 and the production of radionuclides by Oak Ridge National Laboratory for medicine-related use in 1946.

Many historians consider the discovery of artificially produced radionuclides by Frédéric Joliot-Curie and Irène Joliot-Curie in 1934 the most significant milestone in Nuclear Medicine. In February 1934, they reported the first artificial production of radioactive material in the journal Nature, after discovering radioactivity in aluminum foil that was irradiated with a polonium preparation. Their work built upon earlier discoveries by Wilhelm Conrad Röntgen for X-rays, Henri Becquerel for radioactive uranium salts, and Marie Curie (mother of Irène Joliot-Curie) for radioactive thorium and polonium and for coining the term "radioactivity." Taro Takemi studied the application of nuclear physics to medicine in the 1930s. The history of Nuclear Medicine would not be complete without mentioning these early pioneers.

Nuclear medicine gained public recognition as a potential specialty on December 7, 1946, when an article by Sam Seidlin was published in the Journal of the American Medical Association. The article described a successful treatment of a patient with thyroid cancer metastases using radioiodine (I-131). This is considered by many historians the most important article ever published in Nuclear Medicine. Although the earliest use of I-131 was devoted to therapy of thyroid cancer, its use was later expanded to include imaging of the thyroid gland, quantification of thyroid function, and therapy for hyperthyroidism.

Widespread clinical use of Nuclear Medicine began in the early 1950s, as knowledge expanded about radionuclides, 
detection of radioactivity, and using certain radionuclides to trace biochemical processes. Pioneering works by 
Benedict Cassen in developing the first rectilinear scanner and Hal O. Anger's scintillation camera (Anger camera) 
broadened the young discipline of Nuclear Medicine into a full-fledged medical imaging specialty. 

In these early years of Nuclear Medicine, growth was phenomenal. The Society of Nuclear Medicine was formed in 1954 in Spokane, Washington, USA. In 1960, the Society began publication of the Journal of Nuclear Medicine, the premier scientific journal for the discipline in America. There was a flurry of research and development of new radionuclides and radiopharmaceuticals for use with the imaging devices and for in-vitro studies [5].

Among the many radionuclides that were discovered for medical use, none was as important as Technetium-99m. It was first discovered in 1937 by C. Perrier and E. Segrè as an artificial element filling space number 43 in the periodic table. The development of a generator system to produce Technetium-99m in the 1960s made it practical for medical use. Today, Technetium-99m is the most utilized element in Nuclear Medicine and is employed in a wide variety of Nuclear Medicine imaging studies.

By the 1970s most organs of the body could be visualized using Nuclear Medicine procedures. In 1971, the American Medical Association officially recognized nuclear medicine as a medical specialty. In 1972, the American Board of Nuclear Medicine was established, cementing Nuclear Medicine as a medical specialty.

In the 1980s, radiopharmaceuticals were designed for use in diagnosis of heart disease. The development of single 
photon emission tomography, around the same time, led to three-dimensional reconstruction of the heart and 
establishment of the field of Nuclear Cardiology. 

More recent developments in Nuclear Medicine include the invention of the first positron emission tomography scanner (PET). The concept of emission and transmission tomography, later developed into single photon emission computed tomography (SPECT), was introduced by David E. Kuhl and Roy Edwards in the late 1950s. Their work led to the design and construction of several tomographic instruments at the University of Pennsylvania. Tomographic imaging techniques were further developed at the Washington University School of Medicine. These innovations led to fusion imaging with SPECT and CT by Bruce Hasegawa from the University of California, San Francisco (UCSF), and the first PET/CT prototype by D. W. Townsend from the University of Pittsburgh in 1998.

PET and PET/CT imaging experienced slower growth in its early years owing to the cost of the modality and the 
requirement for an on-site or nearby cyclotron. However, an administrative decision to approve medical 
reimbursement of limited PET and PET/CT applications in oncology has led to phenomenal growth and widespread 
acceptance over the last few years. PET/CT imaging is now an integral part of oncology for diagnosis, staging and 
treatment monitoring. 

Source of radionuclides, with notes on a few radiopharmaceuticals 

About a third of the world's supply, and most of North America's supply, of medical isotopes are produced at the 
Chalk River Laboratories in Chalk River, Ontario, Canada. (Another third of the world's supply, and most of 
Europe's supply, are produced at the Petten nuclear reactor in the Netherlands.) The Canadian Nuclear Safety 
Commission ordered the NRU reactor to be shut down on November 18, 2007 for regularly scheduled maintenance 
and an upgrade of the safety systems to modern standards. The upgrade took longer than expected and in December 
2007 a critical shortage of medical isotopes occurred. The Canadian government unanimously passed emergency 
legislation, allowing the reactor to re-start on 16 December 2007, and production of medical isotopes to continue. 

The Chalk River reactor is used to irradiate materials with neutrons which are produced in great quantity during the 
fission of U-235. These neutrons change the nucleus of the irradiated material by adding a neutron, or by splitting it 
in the process of nuclear fission. In a reactor, one of the fission products of uranium is molybdenum-99 which is 
extracted and shipped to radiopharmaceutical houses all over North America. The Mo-99 decays by beta emission with a half-life of 2.7 days, turning initially into Tc-99m, which is then extracted ("milked") from a "moly cow" (see technetium-99m generator). The Tc-99m then further decays, while inside a patient, releasing a gamma photon which is detected by the gamma camera. It decays to its ground state, Tc-99, which is relatively non-radioactive compared to Tc-99m.
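The half-lives above (2.7 days for Mo-99, and 6.01 hours for Tc-99m from the table below) set the logistics of the generator system. As a minimal sketch, assuming the half-life values quoted in this article and a made-up shipment activity, the remaining activity follows A(t) = A0 · 2^(−t/T½):

```python
def remaining_activity(a0_mbq, half_life_h, elapsed_h):
    """Exponential radioactive decay: A(t) = A0 * 2**(-t / T_half)."""
    return a0_mbq * 2.0 ** (-elapsed_h / half_life_h)

MO99_HALF_LIFE_H = 2.7 * 24   # Mo-99 half-life quoted in the text, in hours
TC99M_HALF_LIFE_H = 6.01      # Tc-99m half-life from the isotope table

# A hypothetical 10 GBq (10,000 MBq) Mo-99 shipment after one week in transit
# has decayed to roughly a sixth of its original activity:
print(remaining_activity(10_000, MO99_HALF_LIFE_H, 7 * 24))  # ~1660 MBq
```

This is why Mo-99 generators are replaced on a roughly weekly schedule, while the short-lived Tc-99m itself must be milked and used the same day.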

The most commonly used radioisotope in PET, F-18, is not produced in a nuclear reactor, but rather in a circular accelerator called a cyclotron. The cyclotron is used to accelerate protons to bombard the stable heavy isotope of oxygen, O-18. The O-18 constitutes about 0.20% of ordinary oxygen (mostly O-16), from which it is extracted. The F-18 is then typically used to make FDG.

Common isotopes used in nuclear medicine 










isotope          symbol   t1/2       decay   photons, keV (abundance)          β max, MeV (abundance)
fluorine-18      18F      109.77 m   β+      511 (193%)                        0.664 (97%)
gallium-67       67Ga     3.26 d     ec      93 (39%), 185 (21%), 300 (17%)    -
krypton-81m      81mKr    13.1 s     IT      190 (68%)                         -
rubidium-82      82Rb     1.27 m     β+      511 (191%)                        3.379 (95%)
technetium-99m   99mTc    6.01 h     IT      140 (89%)                         -
indium-111       111In    2.80 d     ec      171 (90%), 245 (94%)              -
iodine-123       123I     13.3 h     ec      159 (83%)                         -
xenon-133        133Xe    5.24 d     β−      81 (31%)                          0.364 (99%)
thallium-201     201Tl    3.04 d     ec      69–71*, 167 (10%)                 -
yttrium-90       90Y      2.67 d     β−      -                                 2.280 (100%)
iodine-131       131I     8.02 d     β−      364 (81%)                         0.807 (100%)

Z = atomic number, the number of protons; t1/2 = half-life; decay = mode of decay
photons = principal photon energies in kilo-electron volts, keV (abundance/decay)
β = beta maximum energy in mega-electron volts, MeV (abundance/decay)
β+ = β+ decay; β− = β− decay; IT = isomeric transition; ec = electron capture
* X-rays from progeny, mercury

A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion while combined with food, inhalation as a gas or aerosol, or, rarely, injection of a radionuclide that has undergone micro-encapsulation. Some studies require the labeling of a patient's own blood cells with a radionuclide (leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays, while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or fusion processes in nuclear reactors, which produce radionuclides with longer half-lives, or cyclotrons, which produce radionuclides with shorter half-lives, or take advantage of natural decay processes in dedicated generators, i.e. molybdenum/technetium.
The most commonly used intravenous radionuclides are: 

• Technetium-99m (99mTc) 

• Iodine-123 and 131 

• Thallium-201 

• Gallium-67 

• Fluorine-18 Fluorodeoxyglucose 

• Indium-111 Labeled Leukocytes 

The most commonly used gaseous/aerosol radionuclides are: 

• Xenon-133 

• Krypton-81m 

• Technetium-99m Technegas 

• Technetium-99m DTPA 


The end result of the nuclear medicine imaging process is a "dataset" comprising one or more images. In 
multi-image datasets the array of images may represent a time sequence (i.e. cine or movie) often called a "dynamic" 
dataset, a cardiac gated time sequence, or a spatial sequence where the gamma-camera is moved relative to the 
patient. SPECT (single photon emission computed tomography) is the process by which images acquired from a 
rotating gamma-camera are reconstructed to produce an image of a "slice" through the patient at a particular position. 
A collection of parallel slices form a slice-stack, a three-dimensional representation of the distribution of 
radionuclide in the patient. 

The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis 
packages for each of the specific imaging techniques available in nuclear medicine. 

Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot. 
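As an illustrative sketch of such kinetic analysis: for a tracer that is irreversibly trapped in tissue, the Patlak plot rearranges the time-activity curves into a straight line whose slope is the influx constant Ki. The function names and the synthetic curves below are invented for illustration; real analyses use measured plasma and tissue curves:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    dt = np.diff(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

def patlak_fit(t, c_plasma, c_tissue, t_start=10.0):
    """Fit C_tissue/C_plasma = Ki * (int C_plasma d tau / C_plasma) + V0
    over the late, linear part of the plot; returns (Ki, V0)."""
    x = cumtrapz(c_plasma, t) / c_plasma   # "Patlak time"
    y = c_tissue / c_plasma
    late = t >= t_start
    ki, v0 = np.polyfit(x[late], y[late], 1)
    return ki, v0

# Synthetic example with a known answer: Ki = 0.05 /min, V0 = 0.2
t = np.linspace(0.1, 60.0, 600)            # minutes
cp = np.exp(-0.05 * t)                     # toy plasma input curve
ct = 0.05 * cumtrapz(cp, t) + 0.2 * cp     # irreversible-uptake model
ki, v0 = patlak_fit(t, cp, ct)
print(round(ki, 3), round(v0, 3))          # 0.05 0.2
```

Because the synthetic tissue curve is generated from the same model, the fit recovers the parameters exactly; on clinical data the late-time window and the irreversibility assumption both have to be checked.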

Radiation dose 

A patient undergoing a nuclear medicine procedure will receive a radiation dose. Under present international 
guidelines it is assumed that any radiation dose, however small, presents a risk. The radiation doses delivered to a 
patient in a nuclear medicine investigation present a very small risk of inducing cancer. In this respect it is similar to 
the risk from X-ray investigations except that the dose is delivered internally rather than from an external source 
such as an X-ray machine. 

The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts 
(usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount 
of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its 
distribution in the body and its rate of clearance from the body. 

Effective doses can range from 6 μSv (0.006 mSv) for a 3 MBq chromium-51 EDTA measurement of glomerular filtration rate to 37 mSv for a 150 MBq thallium-201 non-specific tumour imaging procedure. The common bone scan with 600 MBq of technetium-99m-MDP has an effective dose of approximately 3.5 mSv [1].
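The scaling implied by these figures is administered activity times a per-MBq dose coefficient. In the sketch below, the coefficients are simply back-calculated from the worked examples in this paragraph, not taken from a dosimetry table:

```python
def effective_dose_msv(activity_mbq, msv_per_mbq):
    """Effective dose = administered activity * per-MBq dose coefficient."""
    return activity_mbq * msv_per_mbq

# Coefficients back-calculated from the examples in the text:
TC99M_MDP = 3.5 / 600    # bone scan: 600 MBq -> ~3.5 mSv
CR51_EDTA = 0.006 / 3    # GFR study: 3 MBq -> 0.006 mSv

print(effective_dose_msv(600, TC99M_MDP))   # ~3.5 mSv
print(effective_dose_msv(3, CR51_EDTA))     # ~0.006 mSv
```

The three-orders-of-magnitude spread between these studies comes almost entirely from the coefficient, i.e. from the radiopharmaceutical's physical properties, biodistribution and clearance, not from the administered activity alone.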

Formerly, units of measurement were the curie (Ci), being 3.7 × 10^10 Bq and also the activity of 1.0 gram of radium (Ra-226); the rad (radiation absorbed dose), now replaced by the gray (Gy); and the rem (Röntgen equivalent man), now replaced by the sievert (Sv). The rad and rem are essentially equivalent for almost all nuclear medicine procedures; only alpha radiation will produce a higher rem or Sv value, due to its much higher relative biological effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of nuclear reactor and accelerator produced radionuclides. The concepts involved in radiation exposure to humans are covered by the field of health physics.
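The conversions between the legacy and SI units above are fixed factors, sketched here with an arbitrary example value:

```python
# Fixed conversion factors between legacy and SI radiation units,
# as given in the text: 1 Ci = 3.7e10 Bq, 1 rad = 0.01 Gy, 1 rem = 0.01 Sv.
CI_TO_BQ = 3.7e10
RAD_TO_GY = 0.01
REM_TO_SV = 0.01

def curies_to_mbq(ci):
    """Convert activity in curies to megabecquerels (1 MBq = 1e6 Bq)."""
    return ci * CI_TO_BQ / 1e6

print(curies_to_mbq(1))   # 37000.0 MBq in one curie
```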

Nuclear Medicine Careers 
Nuclear Medicine Technologist 

The information below is adapted from the Society of Nuclear Medicine (SNM) website on a scientist career. For more information and educational requirements, please see Training [7].

The nuclear medicine scientist works closely with the nuclear medicine physician. Some of the scientist's primary 
responsibilities are to: 

• Prepare and administer radioactive chemical compounds, known as radiopharmaceuticals 

• Perform patient imaging procedures using sophisticated radiation-detecting instrumentation 

• Accomplish computer processing and image enhancement 

• Analyze biologic specimens in the laboratory 

• Provide images, data analysis, and patient information to the physician for diagnostic interpretation. 

During an imaging procedure, the scientist works directly with the patient. The scientist: 

• Gains the patient's confidence by obtaining pertinent history, describing the procedure and answering any questions 

• Monitors the patient's physical condition during the course of the procedure 

• Notes any specific patient's comments which might indicate the need for additional images or might be useful to 
the physician in interpreting the results of the procedure. 

Nuclear medicine scientists work in a wide variety of clinical settings, such as 

• Community hospitals 

• University-affiliated teaching hospitals and medical centers 

• Outpatient imaging facilities 

• Public health institutions 

• Government and private research institutes. 

The physician career in nuclear medicine 

Nuclear medicine physicians are primarily responsible for interpretation of diagnostic nuclear medicine scans and 
treatment of certain diseases, such as cancer, thyroid disease and palliative bone pain. 

There are a variety of reasons why physicians have chosen to specialize in nuclear medicine. Some became nuclear 
medicine physicians because of their interest in nuclear physics and medical imaging. Others may have switched to 
nuclear medicine after training in other specialties, because of the regular work hours (on average about 8 to 10 
hours a day). Others have chosen nuclear medicine because of research opportunities in molecular medicine or 
molecular imaging. 

Nuclear medicine physicians frequently interact with other specialties in medicine and consult on a variety of clinical 
cases. A nuclear medicine report may save a patient from more invasive or high risk procedures, and/or lead to early 
disease diagnosis. Nuclear Medicine physicians can be called upon to consult on complex or equivocal clinical cases. 
Aside from consultations with other physicians, nuclear physicians may directly interact with patients through various nuclear medicine therapies (e.g.: I-131 thyroid therapy, refractory lymphoma treatment, palliative bone pain treatment).
A disadvantage of a nuclear medicine career for a physician is that it suffers from low job turnover and a small job market, owing to the specialized nature of the field. Advantages of the field include job satisfaction and more regular hours than many fields of medicine, since the procedures in this field are very rarely performed on an emergency basis.
Nuclear medicine residency/training (physicians) 

The information below is adapted from the American Board of Nuclear Medicine (ABNM). For more information, 
please see ABNM [8] 

General professional education requirement in the United States of America: graduation from a medical school approved by the Liaison Committee on Medical Education or the American Association of Colleges of Osteopathic Medicine.
In the USA, post-doctoral training in nuclear medicine can be approached from three different pathways: 

1. If the person has successfully completed an accredited radiology residency, then an additional ONE year of training in Nuclear Medicine is required to be eligible for ABNM board certification. 

2. If the person has successfully completed a clinical residency (e.g. Internal Medicine, Family Medicine, Surgery, Neurology, etc.), then an additional TWO years of training in Nuclear Medicine is required to be eligible for ABNM board certification. 

3. If the person has successfully completed one year of preparatory post-doctoral training (internship), then an additional THREE years of training in Nuclear Medicine is required to be eligible for ABNM board certification. 

See also 

• Background radiation 

• Human experimentation in the United States 

• Radiology 


References

[1] Scintigraphy. Citing: Dorland's Medical Dictionary for Health Consumers, 2007, Saunders; Saunders Comprehensive Veterinary Dictionary, 3rd ed., 2007; McGraw-Hill Concise Dictionary of Modern Medicine, 2002, The McGraw-Hill Companies.

[2] Gambhir S: Just what is molecular medicine.

[3] Edwards CI: Tumor localizing radionuclides in retrospect and prospect. Semin Nucl Med 3: 186-189, 1979.

[4] Henkin R. et al: Nuclear Medicine. First edition 1996. ISBN 9780801677014.

[5] From the Society of Nuclear Medicine.

[7] Training (http://interactive.snm.org/index.cfm?PageID=985&RPID=193)

[8] ACGME.

Further reading 

• Mas JC: A Patient's Guide to Nuclear Medicine Procedures: English-Spanish. Society of Nuclear Medicine, 2008. ISBN 978-0972647892 

• Taylor A, Schuster DM, Alazraki N: A Clinician's Guide to Nuclear Medicine, 2nd edition. Society of Nuclear Medicine, 2000. ISBN 978-0932004727 

• Shumate MJ, Kooby DA, Alazraki NP: A Clinician's Guide to Nuclear Oncology: Practical Molecular Imaging and Radionuclide Therapies. Society of Nuclear Medicine, January 2007. ISBN 978-0972647885 

• Ell P, Gambhir S: Nuclear Medicine in Clinical Diagnosis and Treatment. Churchill Livingstone, 2004. (1950 pages) ISBN 978-0443073120 

External links 

• Society of Nuclear Medicine 

• Brochure: What is Nuclear Medicine? 

• Resource center: information about nuclear medicine 

• International Atomic Energy Agency (IAEA), Division of Human Health, Nuclear Medicine 

• RADAR Medical Procedure Radiation Dose Calculator and Consent Language Generator 

• Association of Image Producers and Equipment Suppliers 


Radiobiology

Radiobiology (or radiation biology) is the interdisciplinary field of science that studies the biological effects of ionizing and non-ionizing radiation across the whole electromagnetic spectrum, including radioactivity (alpha, beta and gamma), X-rays, ultraviolet radiation, visible light, microwaves, radio waves, low-frequency radiation (such as that used in alternating-current electric power transmission), ultrasound, thermal radiation (heat), and related modalities. It is a subset of 

Areas of interest 

The interactions between electromagnetic fields (EMF) and organisms can be studied at several levels: 

• radiation physics 

• radiation chemistry 

• molecular and cell biology 

• molecular genetics 

• cell death and apoptosis 

• dose-modifying agents 

• protection and repair mechanisms 

• tissue responses to radiation 

• radio-adaptation of living organisms 

• high- and low-level electromagnetic radiation and health 

• specific absorption rates of organisms 

• radiation poisoning 

• radiation oncology (radiation therapy in cancer) 

Radiobiology of non-ionizing radiation includes: 

B ioelectromagnetics 


Radiation sources for radiobiology 

Radiobiology experiments typically make use of a radiation source which could be: 

• An isotopic source, typically caesium-137 or cobalt-60.

• A particle accelerator generating high energy protons, electrons or charged ions. Biological samples can be
irradiated using either a broad, uniform beam or a microbeam focused down to cellular or subcellular
dimensions.
• A UV lamp. 

See also 



Nuclear medicine 

Radioactivity in biology 


Cell survival curve 

Relative biological effectiveness 

Health threat from cosmic rays 

Background radiation 


[1] Pattison, J. E., Hugtenburg, R. P., Beddoe, A. H. and Charles, M. W. (2001), Experimental Simulation of A-bomb Gamma-ray Spectra for 
Radiobiology Studies, Radiation Protection Dosimetry 95(2):125-136. 

References and further reading 

• WikiMindMap (http://www.wikimindmap.org/viewmap.php?wiki=en.wikipedia.org&topic=radiobiology)

• Eric Hall, Radiobiology for the Radiologist. 2006. Lippincott.

• G. Gordon Steel, "Basic Clinical Radiobiology". 2002. Hodder Arnold.

External links 

• The Institute for Radiation Biology at the Helmholtz-Center for Environmental Health (http://www. 

Radiopharmacology 14 


Radiopharmacology is the study and preparation of radiopharmaceuticals, which are radioactive pharmaceuticals. 
Radiopharmaceuticals are used in the field of nuclear medicine as tracers in the diagnosis and treatment of many 
diseases. Many radiopharmaceuticals use technetium-99m (Tc-99m) which has many useful properties as a 
gamma-emitting tracer nuclide. In the book Technetium a total of 31 different radiopharmaceuticals based on 
Tc-99m are listed for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, 
kidneys, skeleton, blood and tumors. 

The term radioisotope has historically been used to refer to all radiopharmaceuticals, and this usage remains 
common. Technically, however, many radiopharmaceuticals incorporate a radioactive tracer atom into a larger 
pharmaceutically-active molecule, which is localized in the body, after which the radionuclide tracer atom allows it 
to be easily detected with a gamma camera or similar gamma imaging device. An example is fludeoxyglucose in 
which fluorine-18 is incorporated into deoxyglucose. Some radioisotopes (for example gallium-67, gallium-68, and 
radioiodine) are used directly as soluble ionic salts, without further modification. This use relies on the chemical and 
biological properties of the radioisotope itself, to localize it within the body. 


See nuclear medicine. 


Production of a radiopharmaceutical involves two processes: 

• The production of the radionuclide on which the pharmaceutical is based. 

• The preparation and packaging of the complete radiopharmaceutical. 

Radionuclides used in radiopharmaceuticals are mostly radioactive isotopes of elements with atomic numbers less 
than that of bismuth, that is, they are radioactive isotopes of elements that also have one or more stable isotopes. 
These may be roughly divided into two classes: 

• Those with fewer neutrons in the nucleus than required for stability are known as neutron-deficient, and tend
to be most easily produced using a proton accelerator such as a medical cyclotron.

• Those with more neutrons in the nucleus than required for stability are known as proton-deficient, and tend
to be most easily produced in a nuclear reactor.

Practical use 

Because radiopharmaceuticals require special licenses and handling techniques, they are often kept in local centers for
medical radioisotope storage, often known as radiopharmacies. From there, a radiopharmacist may dispense them to the
nuclear medicine facilities where they are handled and administered.

Specific radiopharmaceuticals 

A list of nuclear medicine radiopharmaceuticals follows. Some radioisotopes are used in ionic or inert form
without attachment to a pharmaceutical; these are also included. There is a section for each radioisotope with a table
of radiopharmaceuticals using that radioisotope. The sections are ordered alphabetically by the English name of the 
radioisotope. Sections for the same element are then ordered by atomic mass number. 




Calcium-47

Ca-47 is a beta and gamma emitter.

• Ca47-Ca2+: Bone metabolism (IV; in-vitro; non-imaging)

Carbon-11

C-11 is a positron emitter.

• C11-L-methyl-methionine: Brain tumour imaging; parathyroid imaging


Carbon-14

C-14 is a beta emitter.

• C14-Glycocholic acid: Breath test for small intestine bacterial overgrowth
• C14-PABA (para-aminobenzoic acid): Pancreatic studies
• C14-Urea: Breath test to detect Helicobacter pylori
• C14-D-xylose: Breath test for small intestine bacterial overgrowth


Chromium-51

Cr-51 is a gamma emitter.

• Cr51-Red blood cells: Red cell volume; sites of sequestration; gastrointestinal blood loss
• Cr51-Cr3+: Gastrointestinal protein loss
• Cr51-EDTA (ethylenediaminetetraacetic acid): Glomerular filtration rate




Cobalt-57

Co-57 is a gamma emitter.

• Co57-Cyanocobalamin (vitamin B12): Gastrointestinal absorption



Cobalt-58

Co-58 is a gamma emitter.

• Co58-Cyanocobalamin (vitamin B12): Gastrointestinal absorption


Erbium-169

Er-169 is a beta emitter.

• Er169-Colloid: Treatment of arthritic conditions (intra-articular)

Fluorine- 18 

F-18 is a positron emitter with a half life of 109 minutes. It is produced in medical cyclotrons, usually from
oxygen-18, and then chemically attached to a pharmaceutical. See PET scan.
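The 109-minute half-life determines how much F-18 activity survives the delay between cyclotron production and injection. A minimal sketch of the standard exponential-decay arithmetic (the 60-minute delay is an illustrative figure, not from the text):

```python
F18_HALF_LIFE_MIN = 109.0  # fluorine-18 half-life, in minutes

def remaining_fraction(elapsed_min, half_life_min=F18_HALF_LIFE_MIN):
    """Fraction of the initial activity remaining after elapsed_min minutes."""
    return 0.5 ** (elapsed_min / half_life_min)

# Activity left after a hypothetical 60-minute transport delay:
print(round(remaining_fraction(60.0), 3))  # ≈ 0.683
```

This is why F-18 tracers, unlike shorter-lived C-11 or O-15 agents, can be shipped from a regional cyclotron to imaging sites without an on-site accelerator.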



• F18-FDG (fludeoxyglucose): Tumor imaging; myocardial imaging
• F18-Sodium fluoride: Bone imaging
• F18-Fluorocholine: Prostate tumor imaging


Gallium-67

Ga-67 is a gamma emitter. See gallium scan.

• Ga67-Ga3+: Tumor imaging
• Ga67-Ga3+: Infection/inflammation imaging


Gallium-68

Ga-68 is a positron emitter, with a 68-minute half life, produced by elution from germanium-68 in a gallium-68
generator. See also positron emission tomography.

• Ga68-Dotatoc: Neuroendocrine tumor imaging



Hydrogen-3

H-3, or tritium, is a beta emitter.

• H3-Water: Total body water (oral or IV; in-vitro; non-imaging)



Indium-111

In-111 is a gamma emitter.

• In111-DTPA (diethylenetriaminepenta-acetic acid): GI transit
• In111-Platelets: Thrombus imaging
• In111-Pentetreotide: Somatostatin receptor imaging



Iodine-123

I-123 is a gamma emitter. It is used only diagnostically, as its radiation is penetrating and short-lived.

• I123-Iodide: Thyroid uptake (oral or IV)
• I123-Iodide: Thyroid imaging; thyroid metastases imaging (oral or IV)
• I123-Hippuran: Renal imaging
• I123-MIBG (m-iodobenzylguanidine): Neuroectodermal tumour imaging
• I123-FP-CIT (ioflupane): SPECT imaging of Parkinson's disease



Iodine-125

I-125 is a gamma emitter with a long half-life of 59.4 days (the longest of all radioiodines used in medicine).
Iodine-123 is preferred for imaging, so I-125 is used diagnostically only when the test requires a longer period to
prepare the radiopharmaceutical and trace it, such as a fibrinogen scan to diagnose clotting. I-125's gamma radiation
is of medium penetration, making it more useful as a therapeutic isotope for brachytherapy implant of radioisotope
capsules for local treatment of cancers.



• I125-Fibrinogen: Clot imaging


Iodine-131

I-131 is a beta and gamma emitter. It is used both to destroy thyroid and thyroid cancer tissues (via beta radiation,
which is short-range), and also other neuroendocrine tissues when used in MIBG. It can also be seen by a gamma
camera, and can serve as a diagnostic imaging tracer when treatment is also being attempted at the same time.
However, iodine-123 is usually preferred when only imaging is desired.




Diagnostic uses:

• I131-Iodide: Thyroid uptake
• I131-Iodide: Thyroid metastases imaging (oral or IV)
• I131-MIBG (m-iodobenzylguanidine): Neuroectodermal tumor imaging

Therapeutic uses:

• I131-Iodide: Treatment of thyrotoxicosis, non-toxic goiter and thyroid carcinoma (IV or oral)
• I131-MIBG (m-iodobenzylguanidine): Treatment of malignant disease



Iron-59

Fe-59 is a beta and gamma emitter.

• Fe59-Fe2+ or Fe3+: Iron metabolism (IV; in-vitro; non-imaging)


Krypton-81m

Kr-81m is a gamma emitter.

• Kr81m-Gas: Lung ventilation imaging
• Kr81m-Aqueous solution: Lung perfusion imaging



Nitrogen-13

N-13 is a positron emitter.

• N13-Ammonia: Myocardial blood flow imaging



Oxygen-15

O-15 is a positron emitter.

• O15-Water: Cerebral blood flow imaging; myocardial blood flow imaging (IV bolus)



Phosphorus-32

P-32 is a beta emitter.

• P32-Phosphate: Treatment of polycythemia and related disorders (IV or oral)



Samarium-153

Sm-153 is a beta and gamma emitter.

• Sm153-EDTMP (ethylenediaminetetramethylenephosphonic acid): Treatment of bone metastases



Selenium-75

Se-75 is a gamma emitter.

• Se75-Selenocholesterol: Adrenal gland imaging
• Se75-SeHCAT: Bile salt absorption studies



Sodium-22

Na-22 is a positron and gamma emitter.

• Na22-Na+: Electrolyte studies (oral or IV)



Sodium-24

Na-24 is a beta and gamma emitter.

• Na24-Na+: Electrolyte studies (oral or IV)

Strontium-89

Sr-89 is a beta emitter.

• Sr89-Chloride: Treatment of bone metastases (IV)


Technetium-99m

Tc-99m is a gamma emitter. It is obtained on-site at the imaging center as soluble pertechnetate, which is eluted
from a technetium-99m generator, and then either used directly as this soluble salt or used to synthesize one of a
number of technetium-99m-based radiopharmaceuticals.
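The generator works because Tc-99m (half-life roughly 6 hours) is continuously replenished by the decay of longer-lived molybdenum-99 (roughly 66 hours). A sketch of the two-member Bateman equation for the daughter activity after an elution empties the column of Tc-99m; the half-lives are approximate, and the roughly 87% branching of Mo-99 decays into Tc-99m is ignored for simplicity:

```python
import math

T_MO99_H = 66.0   # Mo-99 half-life in hours (approximate)
T_TC99M_H = 6.0   # Tc-99m half-life in hours (approximate)

def tc99m_activity(t_hours, mo_activity_0=1.0):
    """Tc-99m activity (same units as mo_activity_0) t hours after elution,
    from the two-member Bateman equation with zero initial daughter."""
    lam_p = math.log(2) / T_MO99_H   # parent (Mo-99) decay constant
    lam_d = math.log(2) / T_TC99M_H  # daughter (Tc-99m) decay constant
    return mo_activity_0 * (lam_d / (lam_d - lam_p)) * (
        math.exp(-lam_p * t_hours) - math.exp(-lam_d * t_hours))
```

Under these assumptions the daughter activity peaks roughly a day after elution, which is why generators are typically "milked" on a daily schedule.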



• Tc99m-Pertechnetate: Thyroid uptake and thyroid imaging; stomach and salivary gland imaging; Meckel's diverticulum; brain imaging; micturating cystogram; first pass blood flow; first pass peripheral vascular imaging; lacrimal imaging (eye drops)
• Tc99m-Human albumin: Cardiac blood pool imaging; peripheral vascular imaging
• Tc99m-Human albumin macroaggregates or microspheres: Lung perfusion imaging; lung perfusion imaging with venography
• Tc99m-Phosphonates and phosphates: Bone imaging; myocardial imaging
• Tc99m-DTPA (diethylenetriaminepenta-acetic acid): Renal imaging; first pass blood flow; brain imaging; lung ventilation imaging
• Tc99m-DMSA(V) (dimercaptosuccinic acid): Tumor imaging
• Tc99m-DMSA(III) (dimercaptosuccinic acid): Renal imaging
• Tc99m-Colloid: Bone marrow imaging; GI bleeding; lymph node imaging; esophageal transit and reflux imaging; lacrimal imaging (eye drops)
• Tc99m-HIDA (hepatic iminodiacetic acid): Functional biliary system imaging
• Tc99m-Denatured red blood cells: Red cell volume
• Tc99m-Red blood cells: GI bleeding; cardiac blood pool imaging; peripheral vascular imaging
• Tc99m-MAG3 (mercaptoacetyltriglycine): Renal imaging; first pass blood flow
• Tc99m-Exametazime: Cerebral blood flow imaging
• Tc99m-Exametazime labelled leucocytes: Infection/inflammation imaging
• Tc99m-Sestamibi (MIBI, methoxy isobutyl isonitrile): Parathyroid imaging; non-specific tumor imaging; thyroid tumor imaging; breast imaging; myocardial imaging
• Tc99m-Sulesomab (IMMU-MN3 murine Fab'-SH antigranulocyte monoclonal antibody): Infection/inflammation imaging
• Tc99m-Technegas: Lung ventilation imaging
• Tc99m-Human immunoglobulin: Infection/inflammation imaging
• Tc99m-Tetrofosmin: Parathyroid imaging; myocardial imaging
• Tc99m-ECD (ethyl cysteinate dimer): Brain imaging

Thallium-201

Tl-201 is a gamma emitter.

• Tl201-Tl+: Non-specific tumor imaging; thyroid tumor imaging; myocardial imaging; parathyroid imaging




Xenon-133

Xe-133 is a gamma emitter.

• Xe133-Gas: Lung ventilation imaging
• Xe133 in isotonic sodium chloride solution: Cerebral blood flow imaging



Yttrium-90

Y-90 is a beta emitter.

• Y90-Silicate: Treatment of arthritic conditions (intra-articular)
• Y90-Silicate: Treatment of malignant disease (intracavitary)

See also 

Radioactive tracer 


[1] Schwochau, Klaus. Technetium. Wiley-VCH (2000). ISBN 3-527-29496-1 


• Notes for guidance on the clinical administration of radiopharmaceuticals and use of sealed radioactive sources. 
Administration of radioactive substances advisory committee. March 2006. Produced by the Health Protection 

• Malabsorption ( 1 1/chl 1 la.jsp). In: The Merck Manual 
of Geriatrics, chapter 111. 

• Leukoscan summary of product characteristics ( 
leukoscan/H-111-PI-en.pdf) (Tc99m-Sulesomab).


Medical diagnosis 24 

Medical diagnosis 

Medical diagnosis refers both to the process of attempting to determine the identity of a possible disease or disorder 
and to the opinion reached by this process. 

The term diagnostic criteria designates the combination of signs, symptoms, and test results that the clinician uses to 
attempt to determine the correct diagnosis. The plural of diagnosis is diagnoses, the verb is to diagnose, and a person
who diagnoses is called a diagnostician. The word diagnosis (pronounced /ˌdaɪ.əɡˈnoʊsɪs/) is derived through
Latin from the Greek word διαγιγνώσκειν, meaning to discern or distinguish. This Greek word is formed from
διά, meaning apart, and γιγνώσκειν, meaning to learn.


Typically, a person with abnormal symptoms will consult a health care provider such as a physician, podiatrist, nurse
practitioner, physical therapist or physician assistant, who will then obtain a medical history of the patient's illness
and perform a physical examination for signs of disease. The provider will formulate a hypothesis of likely
diagnoses and in many cases will obtain further testing to confirm or clarify the diagnosis before providing treatment.

Medical tests commonly performed are measuring blood pressure, checking the pulse rate, listening to the heart with 
a stethoscope, urine tests, fecal tests, saliva tests, blood tests, medical imaging, electrocardiogram, hydrogen breath 
test and occasionally biopsy. 

For instance, a common disorder such as pneumonia was nevertheless used as a diagnosis before the germ theory
was accepted, and the disease was defined as a complex of many symptoms consisting of cough, sputum production,
fever and chills. Later, as the actual cause was assigned to micro-organisms, the term diagnosis came to include the
causality, e.g., pneumococcal pneumonia, suggesting not only a spectrum of symptoms but also a cause for the
disease.
Advances in medicine could be described as a shift from the first, symptom-based sense of the term to the second,
cause-based sense as scientific causalities were discovered. This differentiation of the term diagnosis is critically
important because widespread disagreement exists
between medical and psychiatric practitioners as to whether causalities for various diseases and disorders are known 
or not. If causalities are assumed to be known, then authentic cures can be obtained by correcting the causal 
abnormalities. If causalities are assumed to be unknown, then palliative treatments to reduce symptoms are the best 
treatments possible. 

Diagnosis in medical practice 

A provider's job is to know the human body and its functions in terms of normality (homeostasis). The four 
cornerstones of diagnostic medicine, each essential for understanding homeostasis, are: anatomy (the structure of the 
human body), physiology (how the body works), pathology (what can go wrong with the anatomy and physiology) 
and psychology (thought and behavior). Once the provider knows what is normal and can measure the patient's 
current condition against those norms, she or he can then determine the patient's particular departure from 
homeostasis and the degree of departure. This is called the diagnosis. Once a diagnosis has been reached, the 
provider is able to propose a management plan, which will include treatment as well as plans for follow-up. From 
this point on, in addition to treating the patient's condition, the provider educates the patient about the causes, 
progression, outcomes, and possible treatments of his ailments, as well as providing advice for maintaining health. 

Medical diagnosis in psychology or psychiatry, however, is problematic. Apart from the fact
that there are differing theoretical views toward mental conditions and that there are few "lab" tests available for
various major disorders (e.g., clinical depression), a causal analysis with respect to symptomatology and
disorder/disease is not always possible. As a result, most if not all mental conditions function as both symptoms as


well as disorders. There are often functional descriptions provided for psychological disorders, and these are
vulnerable to circular reasoning due to the etiological fuzziness inherent in these diagnostic categories. (BDG, 2006)

Diagnostic procedure 

The diagnostic process is a fluid one, in which the provider gathers information from the patient and others, from a physical
examination of the patient, and from medical tests performed upon the patient.


There are a number of techniques used by providers to obtain a correct diagnosis:

• exhaustive method: every possible question is asked and all possible data are collected;

• algorithmic method: the provider follows the steps of a proven strategy;

• pattern-recognition method: the provider uses experience to recognise a pattern of clinical characteristics;

• differential diagnosis: the provider uses the hypothetico-deductive method, a systematic, problem-focused method of inquiry.

The advanced clinician uses a combination of the pattern-recognition and hypothetico-deductive approaches. 

The presence of some medical conditions cannot be established with complete confidence from examination or
testing. Diagnosis is therefore by elimination of other reasonable possibilities, referred to as the diagnosis of
exclusion.

The provider should consider the patient in their 'well' context rather than simply as a walking medical condition. 
This entails assessing the socio-political context of the patient (family, work, stress, beliefs), in addition to the 
patient's physical body, as this often offers vital clues to the patient's condition and its management. 

The process of diagnosis begins when the patient consults the provider and presents a set of complaints (symptoms). 
If the patient is unconscious, this condition is the de facto complaint. The provider then obtains further information 
from the patient and from those who know him or her, if present, about the patient's symptoms, their previous state 
of health, living conditions, and so forth. 

Rather than consider the myriad diseases that could afflict the patient, the provider narrows down the possibilities to
those illnesses likely to account for the apparent symptoms, making a list of only those diseases (conditions) that could
account for what is wrong with the patient. These are generally ranked in order of probability.

The provider then conducts a physical examination of the patient, studies the patient's medical record, and asks
further questions in an effort to rule out as many of the potential conditions as possible. This list of candidate
conditions is the differential diagnosis; when it has been narrowed down to a single condition, that condition provides
the basis for a hypothesis of what is ailing the patient.

Unless the provider is certain of the condition present, further medical tests are performed or scheduled such as 
medical imaging, in part to confirm or disprove the diagnosis but also to document the patient's status to keep the 
patient's medical history up to date. Consultations with other providers and specialists in the field may be sought. If 
unexpected findings are made during this process, the initial hypothesis may be ruled out and the provider must then 
consider other hypotheses. 

Despite all of these complexities, most patient consultations are relatively brief, because many diseases are obvious,
or the provider's experience may enable him or her to recognize the condition quickly. Another factor is that the
decision trees used for most diagnostic hypothesis testing are relatively short.

Once the provider has completed the diagnosis, the prognosis is explained to the patient and a treatment plan is 
proposed which includes therapy and follow-up consultations and tests to monitor the condition and the progress of 


the treatment, if needed, usually according to the medical guideline provided by the medical field on the treatment of 
the particular illness. 

Treatment itself may indicate a need for review of the diagnosis if there is a failure to respond to treatments that 
would normally work. 

A laboratory diagnosis is either a substitute for or a complement to the diagnosis made by examination of the patient.
For instance, a proper diagnosis of infectious diseases usually requires both an examination of symptoms and
laboratory characterization of the pathogen involved.

Diagnostic tests 

A diagnostic test is any kind of medical test performed to aid in the diagnosis or detection of disease. The possible 
benefits of a diagnostic test must be weighed against the costs of unnecessary tests and resulting unnecessary 
follow-up and possibly even unnecessary treatment of incidental findings. 
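Weighing a test's benefits against its costs depends heavily on disease prevalence: even an accurate test yields mostly false positives when the condition is rare. A minimal sketch of the standard Bayes calculation; the sensitivity, specificity and prevalence figures are illustrative only:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test result (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A 95%-sensitive, 95%-specific test for a disease affecting 1 in 1000 people:
ppv = positive_predictive_value(0.95, 0.95, 0.001)
print(round(ppv, 3))  # ≈ 0.019: most positive results are false positives
```

The same test applied to a high-prevalence population gives a far higher predictive value, which is why screening policy focuses on at-risk groups.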

Diagnostic tests can have psychological effects on the patient that increase or reduce the symptoms. 


Overdiagnosis is the diagnosis of "disease" that will never cause symptoms or death during a patient's lifetime. It is a 
problem because it turns people into patients unnecessarily and because it leads to treatments that can only cause 
harm. Overdiagnosis occurs when a disease is diagnosed correctly, but the diagnosis is irrelevant. A correct 
diagnosis may be irrelevant because treatment for the disease is not available, not needed, or not wanted. 

Errors in diagnosis 

Causes of error in diagnosis are: 

• the manifestations of disease are not sufficiently noticeable

• a disease is omitted from consideration 

• too much significance is given to some aspect of the diagnosis 


Clinical decision support systems are interactive computer programs designed to assist health professionals with
decision-making tasks. The clinician interacts with the software, utilizing both the clinician's knowledge and the
software to make a better analysis of the patient's data than either human or software could make on their own.
Typically the system makes suggestions for the clinician to look through, and the clinician picks useful information
and removes erroneous suggestions.
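The probability-ranked list of candidate diagnoses described above can be sketched as a naive Bayesian update over candidate conditions; the diseases, priors and per-symptom likelihoods below are invented purely for illustration:

```python
# Hypothetical priors and per-symptom likelihoods, for illustration only.
priors = {"common cold": 0.70, "influenza": 0.25, "pneumonia": 0.05}
likelihood = {  # P(symptom | disease), invented numbers
    "fever": {"common cold": 0.2, "influenza": 0.9, "pneumonia": 0.9},
    "cough": {"common cold": 0.6, "influenza": 0.7, "pneumonia": 0.9},
}

def rank_diagnoses(symptoms):
    """Rank candidate diagnoses by normalized posterior probability."""
    scores = {}
    for disease, prior in priors.items():
        score = prior
        for s in symptoms:
            score *= likelihood[s][disease]  # naive independence assumption
        scores[disease] = score
    total = sum(scores.values())
    return sorted(((d, p / total) for d, p in scores.items()),
                  key=lambda item: item[1], reverse=True)

print(rank_diagnoses(["fever", "cough"]))
```

Real decision support systems use far richer models and knowledge bases, but the underlying idea of updating a ranked differential as findings accumulate is the same.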


The history of medical diagnosis began in earnest from the days of Imhotep in ancient Egypt and Hippocrates in 
ancient Greece. In Traditional Chinese Medicine, there are four diagnostic methods: inspection, 
auscultation-olfaction, interrogation, and palpation. A Babylonian medical textbook, the Diagnostic Handbook 
written by Esagil-kin-apli (fl. 1069-1046 BC), introduced the use of empiricism, logic and rationality in the diagnosis 
of an illness or disease. The book made use of logical rules in combining observed symptoms on the body of a
patient with the diagnosis and prognosis. Esagil-kin-apli described the symptoms for many varieties of epilepsy
and related ailments along with their diagnosis and prognosis.

The practice of diagnosis continues to be dominated by theories set down in the early 20th century. 


See also 


Diagnosis codes 

Diagnosis of exclusion 

Diagnosis-related group 

Diagnostic and Statistical Manual of Mental Disorders 

Doctor-patient relationship 


International Statistical Classification of 

Diseases and Related Health Problems (ICD) 

Medical classification 

Merck Manual of Diagnosis and Therapy 

Misdiagnosis and medical error 


Nursing diagnosis 




Preimplantation genetic diagnosis 

Prenatal diagnosis 

Remote diagnostics 
Self diagnosis 
Trashcan diagnosis 


List of diseases 
List of disorders 

List of medical symptoms 
Category :Diseases 


[1] "Online Etymology Dictionary" (http://www.etymonline.com/index.php?term=diagnosis).

[2] Making a diagnosis, John P. Langlois, Chapter 10 in Fundamentals of clinical practice (2002). Mark B. Mengel, Warren Lee Holleman, Scott 

A. Fields. 2nd edition, p.198. ISBN 0-306-46692-9 
[3] p.204ibid. 
[4] Jarvik J, Hollingworth W, Martin B, Emerson S, Gray D, Overman S, Robinson D, Staiger T, Wessbecher F, Sullivan S, Kreuter W, Deyo R 

(2003). "Rapid magnetic resonance imaging vs radiographs for patients with low back pain: a randomized controlled trial". JAMA 289 (21): 

2810-8. doi:10.1001/jama.289.21.2810. PMID 12783911. 
[5] Sox H, Margulies I, Sox C (1981). "Psychologically mediated effects of diagnostic tests". Ann Intern Med 95 (6): 680-5. PMID 7305144. 
[6] Petrie K, Muller J, Schirmbeck F, Donkin L, Broadbent E, Ellis C, Gamble G, Rief W (2007). "Effect of providing information about normal 

test results on patients' reassurance: randomised controlled trial". BM7 334: 352. doi:10.1136/bmj.39093.464190.55. PMID 17259186. 
[7] doi:10.1207/s15516709cog0503_3

[8] Decision support systems. 26 July 2005. 17 Feb. 2009 <> 
[9] Four diagnostic methods of traditional Chinese medicine ( 

[10] H. F. J. Horstmanshoff, Marten Stol, Cornelis Tilburg (2004), Magic and Rationality in Ancient Near Eastern and Graeco-Roman Medicine,
p. 97-98, Brill Publishers, ISBN 90-04-13666-5.

[11] H. F. J. Horstmanshoff, Marten Stol, Cornelis Tilburg (2004), Magic and Rationality in Ancient Near Eastern and Graeco-Roman Medicine,
p. 99, Brill Publishers, ISBN 90-04-13666-5.

[12] Marten Stol (1993), Epilepsy in Babylonia, p. 5, Brill Publishers, ISBN 90-72371-63-1. 

External links 

• The Merck Manuals Online Medical Library ( 


Medical Imaging 

Medical Imaging Techniques 

Medical imaging is the technique and process used to create images of the human body (or parts and function 
thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science 
(including the study of normal anatomy and physiology). Although imaging of removed organs and tissues can be 
performed for medical reasons, such procedures are not usually referred to as medical imaging, but rather are a part 
of pathology. 

As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider 
sense), nuclear medicine, investigative radiological sciences, endoscopy, (medical) thermography, medical 
photography and microscopy (e.g. for human pathological investigations). 

Measurement and recording techniques which are not primarily designed to produce images, such as 
electroencephalography (EEG), magnetoencephalography (MEG), Electrocardiography (EKG) and others, but which 
produce data susceptible to be represented as maps (i.e. containing positional information), can be seen as forms of 
medical imaging. 


In the clinical context, "invisible light" medical imaging is generally equated to radiology or "clinical imaging" and 
the medical practitioner responsible for interpreting (and sometimes acquiring) the images is a radiologist. "Visible 
light" medical imaging involves digital video or still pictures that can be seen without special equipment. 
Dermatology and wound care are two modalities that utilize visible light imagery. Diagnostic radiography designates 
the technical aspects of medical imaging and in particular the acquisition of medical images. The radiographer or 
radiologic technologist is usually responsible for acquiring medical images of diagnostic quality, although some 
radiological interventions are performed by radiologists. While radiology is an evaluation of anatomy, nuclear 
medicine provides functional assessment. 

As a field of scientific investigation, medical imaging constitutes a sub-discipline of biomedical engineering, 
medical physics or medicine depending on the context: Research and development in the area of instrumentation, 
image acquisition (e.g. radiography), modelling and quantification are usually the preserve of biomedical 
engineering, medical physics and computer science; Research into the application and interpretation of medical 
images is usually the preserve of radiology and the medical sub-discipline relevant to medical condition or area of 
medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the techniques 
developed for medical imaging also have scientific and industrial applications. 

Medical imaging is often perceived to designate the set of techniques that noninvasively produce images of the 
internal aspect of the body. In this restricted sense, medical imaging can be seen as the solution of mathematical 
inverse problems. This means that cause (the properties of living tissue) is inferred from effect (the observed signal). 
In the case of ultrasonography the probe consists of ultrasonic pressure waves and echoes inside the tissue show the 
internal structure. In the case of projection radiography, the probe is X-ray radiation which is absorbed at different 
rates in different tissue types such as bone, muscle and fat. 
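The differential absorption that projection radiography relies on follows the Beer-Lambert attenuation law, I = I0 · exp(-μx). A sketch with illustrative (not tabulated) attenuation coefficients:

```python
import math

# Illustrative linear attenuation coefficients (per cm) at a diagnostic
# X-ray energy; real values depend strongly on photon energy and tissue.
MU = {"bone": 0.5, "muscle": 0.2, "fat": 0.18}

def transmitted_fraction(path):
    """Fraction of X-ray intensity surviving a path through tissue layers.

    path: list of (tissue, thickness_cm) pairs along the beam.
    """
    total = sum(MU[tissue] * thickness for tissue, thickness in path)
    return math.exp(-total)

# Bone attenuates more than the same thickness of muscle, giving contrast:
print(transmitted_fraction([("bone", 2.0)]) < transmitted_fraction([("muscle", 2.0)]))
```

Recovering the attenuation map from many such line integrals, measured at many angles, is exactly the inverse problem that computed tomography solves.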

The term noninvasive reflects the fact that the following medical imaging modalities do not penetrate the skin
physically. At the electromagnetic and radiation level, however, they are quite invasive: from the high energy photons in
X-ray computed tomography to the 2+ tesla coils of an MRI device, these modalities alter the physical and
chemical reactions of the body in order to obtain data.

Medical Imaging Techniques 



Two forms of radiographic images are in use in medical imaging: projection radiography and fluoroscopy, the
latter being useful for intraoperative and catheter guidance. These 2D techniques are still in wide use despite the
advance of 3D tomography, due to their low cost, high resolution, and, depending on the application, lower radiation
dosages. This imaging modality uses a wide beam of X-rays for image acquisition and was the first imaging
technique available in modern medicine.

• Fluoroscopy produces real-time images of internal structures of the body in a similar fashion to radiography, but 
employs a constant input of x-rays, at a lower dose rate. Contrast media, such as barium, iodine, and air are used 
to visualize internal organs as they work. Fluoroscopy is also used in image-guided procedures when constant 
feedback during a procedure is required. An image receptor is required to convert the radiation into an image after
it has passed through the area of interest. Early on this was a fluorescing screen, which gave way to an image
amplifier (IA), a large vacuum tube with the receiving end coated with cesium iodide and a mirror
at the opposite end. Eventually the mirror was replaced with a TV camera.

• Projectional radiographs, more commonly known as x-rays, are often used to determine the type and extent of a 
fracture as well as for detecting pathological changes in the lungs. With the use of radio-opaque contrast media, 
such as barium, they can also be used to visualize the structure of the stomach and intestines - this can help 
diagnose ulcers or certain types of colon cancer. 

Magnetic resonance imaging (MRI) 

A magnetic resonance imaging instrument (MRI scanner), or "nuclear 
magnetic resonance (NMR) imaging" scanner as it was originally 
known, uses powerful magnets to polarise and excite hydrogen nuclei 
(single proton) in water molecules in human tissue, producing a 
detectable signal which is spatially encoded, resulting in images of the 
body. MRI uses three electromagnetic fields: a very strong (on the 
order of units of teslas) static magnetic field to polarize the hydrogen 
nuclei, called the static field; a weaker time-varying (on the order of 
1 kHz) field(s) for spatial encoding, called the gradient field(s); and a 
weak radio-frequency (RF) field for manipulation of the hydrogen 
nuclei to produce measurable signals, collected through an RF antenna. 
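To make the field strengths concrete: the RF field must oscillate at the Larmor frequency of hydrogen, f = (γ/2π)·B0, where γ/2π ≈ 42.58 MHz/T for ¹H. A minimal sketch (the constant is the standard hydrogen value; the 1.5 T and 3 T field strengths are typical clinical examples):

```python
GAMMA_BAR_H = 42.577478  # MHz per tesla: gyromagnetic ratio of 1H divided by 2*pi

def larmor_frequency_mhz(b0_tesla: float) -> float:
    """Larmor precession frequency f = (gamma / 2 pi) * B0 for hydrogen nuclei."""
    return GAMMA_BAR_H * b0_tesla

for b0 in (1.5, 3.0):
    print(f"B0 = {b0} T -> RF frequency ~ {larmor_frequency_mhz(b0):.1f} MHz")
```

This is why clinical scanners transmit and receive in the tens-of-megahertz radio band, while the spatial-encoding gradients vary far more slowly.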

Like CT, MRI traditionally creates a two-dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique. Modern MRI instruments are capable of producing images in the form of 3D blocks, which may be considered a generalisation of the single-slice, tomographic, concept. Unlike CT, MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. For example, because MRI has only been in use since the early 1980s, there are no known long-term effects of exposure to strong static fields (this is the subject of some debate; see 'Safety' in MRI) and therefore there is no limit to the number of scans to which an individual can be subjected, in contrast with X-ray and CT. However, there are well-identified health risks associated with tissue heating from exposure to the RF field and the presence of implanted devices in the body, such as pacemakers. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used.

Because CT and MRI are sensitive to different tissue properties, the appearance of the images obtained with the two 
techniques differ markedly. In CT, X-rays must be blocked by some form of dense tissue to create an image, so the 
image quality when looking at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the proton of the hydrogen atom remains the most widely used, especially in the clinical setting, because it is
so ubiquitous and returns a large signal. This nucleus, present in water molecules, allows the excellent soft-tissue 
contrast achievable with MRI. 

Nuclear medicine 

Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging and therapeutics. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathologies. Different from the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras are used in e.g. scintigraphy, SPECT and PET to detect regions of biologic activity that may be associated with disease. A relatively short-lived isotope, such as iodine-123, is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data.
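Because the image is built from photon count data, nuclear medicine images are governed by Poisson counting statistics: a pixel receiving N counts has relative noise of roughly 1/√N. The toy simulation below (pure standard library; the sampler, mean count, and pixel number are illustrative, not a model of any real camera) demonstrates the characteristic variance ≈ mean property:

```python
import math
import random

def simulate_counts(mean_counts: float, n_pixels: int, seed: int = 0) -> list[int]:
    """Draw Poisson-distributed photon counts for n_pixels detector bins,
    using Knuth's inverse-transform sampler (adequate for modest means)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_pixels):
        threshold = math.exp(-mean_counts)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        counts.append(k)
    return counts

counts = simulate_counts(mean_counts=100.0, n_pixels=10_000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"mean ~ {mean:.1f}, variance ~ {var:.1f}  (Poisson: variance ~ mean)")
print(f"relative noise ~ {math.sqrt(mean) / mean:.3f}  (~1/sqrt(N))")
```

This is why acquisition time and administered activity trade off directly against image noise.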

• Scintigraphy ("scint") is a form of diagnostic test wherein radioisotopes are taken internally, for example intravenously or orally. A gamma camera then captures and forms two-dimensional images from the radiation emitted by the radiopharmaceuticals. For example, technetium-labeled isoniazid (INH) and ethambutol (EMB) have been used in tubercular imaging for early diagnosis of tuberculosis.

• SPECT is a 3D tomographic technique that uses gamma camera data from many projections, which can be reconstructed in different planes. A dual-detector-head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT/CT camera, and has shown utility in advancing the field of molecular imaging. In most other medical imaging modalities, energy is passed through the body and the reaction or result is read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly thallium-201, technetium-99m, iodine-123, or gallium-68. The radioactive gamma rays are emitted through the body as the natural decay process of these isotopes takes place. The emissions of the gamma rays are captured by detectors that surround the body. This essentially means that the human is now the source of the radioactivity, rather than the imaging device, as in X-ray or CT.
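The practical meaning of "relatively short-lived" can be seen from simple exponential decay, A(t) = A0 · 2^(-t/T½). The sketch below uses approximate, commonly quoted half-lives for three tracers; it is an illustration, not dosimetry:

```python
def remaining_activity(a0: float, half_life_h: float, elapsed_h: float) -> float:
    """Exponential radioactive decay: A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2.0 ** (-elapsed_h / half_life_h)

# Approximate half-lives in hours (commonly quoted textbook values)
half_lives = {"Tc-99m": 6.0, "I-123": 13.2, "Tl-201": 73.0}

for nuclide, t_half in half_lives.items():
    frac = remaining_activity(1.0, t_half, 24.0)
    print(f"{nuclide}: {frac:.1%} of the injected activity remains after 24 h")
```

Short half-lives limit the patient's dose but also mean tracers must be produced or eluted close to the time of the scan.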

• Positron emission tomography (PET) uses coincidence detection to image functional processes. A short-lived positron-emitting isotope, such as fluorine-18, is incorporated into an organic substance such as glucose, creating F18-fluorodeoxyglucose (FDG), which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can show rapidly growing tissue, such as a tumor, metastasis, or infection. PET images can be viewed in comparison to computed tomography scans to determine an anatomic correlate. Modern scanners combine PET with a CT, or even MRI, to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off of the gantry. The resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.

Photoacoustic imaging 

Photoacoustic imaging is a recently developed hybrid biomedical imaging modality based on the photoacoustic effect. It combines the advantages of optical absorption contrast with ultrasonic spatial resolution for deep imaging in the (optical) diffusive or quasi-diffusive regime. Recent studies have shown that photoacoustic imaging can be used in vivo for tumor angiogenesis monitoring, blood oxygenation mapping, functional brain imaging, and skin melanoma detection.


Breast Thermography 

Digital infrared imaging thermography is based on the principle that metabolic activity and vascular circulation in both pre-cancerous tissue and the area surrounding a developing breast cancer are almost always higher than in normal breast tissue. Cancerous tumors require an ever-increasing supply of nutrients and therefore increase circulation to their cells by holding open existing blood vessels, opening dormant vessels, and creating new ones (neoangiogenesis). This process frequently results in an increase in regional surface temperatures of the breast. Digital infrared imaging uses extremely sensitive medical infrared cameras and sophisticated computers to detect, analyze, and produce high-resolution diagnostic images of these temperature variations. Because of digital infrared imaging's sensitivity, these temperature variations may be among the earliest signs of breast cancer and/or a pre-cancerous state of the breast.


Tomography

Tomography is the method of imaging a single plane, or slice, of an object, resulting in a tomogram. There are several forms of tomography:

• Linear tomography: This is the most basic form of tomography. The X-ray tube moves from point "A" to point "B" above the patient, while the cassette holder (or "bucky") moves simultaneously under the patient from point "B" to point "A." The fulcrum, or pivot point, is set to the area of interest. In this manner, the points above and below the focal plane are blurred out, just as the background is blurred when panning a camera during exposure. It is no longer carried out, having been replaced by computed tomography.

• Polytomography: This was a complex form of tomography. With this technique, a number of geometrical movements were programmed, such as hypocycloidal, circular, figure-8, and elliptical. Philips Medical Systems produced one such device, called the 'Polytome.' This unit was still in use into the 1990s, as small or difficult anatomy, such as the inner ear, was still difficult to image with CT at that time. As the resolution of CT improved, this procedure was superseded by CT.

• Zonography: This is a variant of linear tomography, where a limited arc of movement is used. It is still used in 
some centres for visualising the kidney during an intravenous urogram (IVU). 

• Orthopantomography (OPT or OPG): The only common conventional tomographic examination still in use. This makes use of a complex movement to allow the radiographic examination of the mandible, as if it were a flat bone. It is often referred to as a "Panorex", but this is incorrect, as that is a trademark of a specific company.

• Computed tomography (CT), or computed axial tomography (CAT), is a helical tomography (latest generation) which traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays. It has a greater ionizing radiation dose burden than projection radiography; repeated scans must be limited to avoid health effects. CT is based on the same principles as X-ray projection, but in this case the patient is enclosed in a surrounding ring of 500-1000 scintillation detectors (the fourth-generation X-ray CT scanner geometry). In older-generation scanners, the X-ray beam was paired with a translating source and detector.


Ultrasound

Medical ultrasonography uses high-frequency broadband sound waves in the megahertz range that are reflected by tissue to varying degrees to produce (up to 3D) images. This is commonly associated with imaging the fetus in pregnant women. The uses of ultrasound are much broader, however: other important applications include imaging the abdominal organs, heart, breast, muscles, tendons, arteries and veins. While it may provide less anatomical detail than techniques such as CT or MRI, it has several advantages which make it ideal in numerous situations, in particular that it studies the function of moving structures in real time, emits no ionizing radiation, and contains speckle that can be used in elastography. Ultrasound is also a popular research tool for capturing raw data that can be made available through an ultrasound research interface for the purpose of tissue characterization and
implementation of new image processing techniques. The concepts of ultrasound differ from other medical imaging modalities in that it operates by the transmission and receipt of sound waves. The high-frequency sound waves are sent into the tissue and, depending on the composition of the different tissues, the signal will be attenuated and returned at separate intervals. A path of reflected sound waves in a multilayered structure can be described by the input acoustic impedance of the ultrasound wave and the reflection and transmission coefficients of the relative structures. Ultrasound is very safe to use and does not appear to cause any adverse effects, although information on this is not well documented. It is also relatively inexpensive and quick to perform. Ultrasound scanners can be taken to critically ill patients in intensive care units, avoiding the danger caused while moving the patient to the radiology department. The real-time moving image obtained can be used to guide drainage and biopsy procedures. Doppler capabilities on modern scanners allow the blood flow in arteries and veins to be assessed.
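The echo strength at each boundary follows from the acoustic impedance mismatch: at normal incidence the intensity reflection coefficient is R = ((Z2 - Z1) / (Z2 + Z1))². A minimal sketch with approximate textbook impedance values (illustrative, not calibration data) also shows why a tissue/air boundary reflects nearly everything, which is why coupling gel is required:

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Intensity reflection coefficient at a boundary between media of
    acoustic impedance z1 and z2, at normal incidence."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Approximate acoustic impedances in MRayl (10^6 kg m^-2 s^-1), textbook values
Z = {"soft tissue": 1.63, "fat": 1.38, "bone": 7.8, "air": 0.0004}

print(f"tissue/fat : {reflection_coefficient(Z['soft tissue'], Z['fat']):.4f}")
print(f"tissue/bone: {reflection_coefficient(Z['soft tissue'], Z['bone']):.4f}")
print(f"tissue/air : {reflection_coefficient(Z['soft tissue'], Z['air']):.4f}")
```

The near-total reflection at air interfaces is also the reason ultrasound cannot image through aerated lung or bowel gas.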

Medical imaging topics 
Maximizing imaging procedure use 

The amount of data obtained in a single MR or CT scan is very extensive. Some of the data that radiologists discard 
could save patients time and money, while reducing their exposure to radiation and risk of complications from 
invasive procedures. 

Creation of three-dimensional images 

Recently, techniques have been developed to enable CT, MRI and ultrasound scanning software to produce 3D images for the physician. Traditionally CT and MRI scans produced 2D static output on film. To produce 3D images, many scans are made and then combined by computers to produce a 3D model, which can then be manipulated by the physician. 3D ultrasounds are produced using a somewhat similar technique. In diagnosing diseases of the abdominal viscera, ultrasound is particularly sensitive in imaging the biliary tract, the urinary tract and the female reproductive organs (ovaries, fallopian tubes); for example, gallstones may be diagnosed by dilatation of the common bile duct and stones in the common bile duct. With the ability to visualize important structures in great detail, 3D visualization methods are a valuable resource for the diagnosis and surgical treatment of many pathologies. 3D visualization was a key resource for the famous, but ultimately unsuccessful, attempt by Singaporean surgeons to separate the Iranian conjoined twins Ladan and Laleh Bijani in 2003. The 3D equipment had been used previously for similar operations with great success.
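The slice-stacking idea can be sketched with plain Python lists: axial slices are stacked into a volume, and a slice in another plane is obtained by re-indexing. The voxel values and dimensions below are synthetic; real pipelines add registration and interpolation between slices:

```python
# Toy sketch: stack axial 2D slices into a 3D volume, then extract a
# "coronal" slice by re-indexing along a different axis.

def make_axial_slice(z: int, rows: int = 4, cols: int = 4) -> list[list[int]]:
    """Synthetic axial slice whose voxel values encode their (z, y, x) index."""
    return [[100 * z + 10 * y + x for x in range(cols)] for y in range(rows)]

volume = [make_axial_slice(z) for z in range(3)]       # indexed volume[z][y][x]

def coronal_slice(volume: list, y: int) -> list[list[int]]:
    """Reslice the stacked volume along a fixed y (coronal plane)."""
    return [volume[z][y] for z in range(len(volume))]  # rows indexed by z

cor = coronal_slice(volume, y=2)
print(cor)  # each row of the coronal slice comes from a different axial slice
```

Production systems perform the same re-indexing on large voxel arrays and interpolate where slice spacing differs from in-plane resolution.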

Other proposed or developed techniques include:

• Diffuse optical tomography
• Elastography
• Electrical impedance tomography
• Optoacoustic imaging
• Ophthalmology
  • A-scan
  • B-scan
  • Corneal topography
  • Optical coherence tomography
  • Scanning laser ophthalmoscopy

Some of these techniques are still at a research stage and not yet used in clinical routines.


Compression of medical images 

Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a 
result, storage and communications of electronic image data are prohibitive without the use of compression. JPEG 
2000 is the state-of-the-art image compression DICOM standard for storage and transmission of medical images. 
The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by 
use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data. 
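A JPEG 2000 codec is beyond a short example, but the motivation for compression can be shown with the standard library's lossless zlib as a stand-in: structured image data compresses dramatically, while noise-like data barely compresses at all. The 512×512 byte arrays below are synthetic, not real scan data:

```python
import random
import zlib

# Synthetic 512x512 8-bit "images": a smooth gradient (compresses well) and
# pure noise (essentially incompressible). Real CT/MR data falls in between,
# which is why efficient standards such as JPEG 2000 matter for archives.
smooth = bytes((x + y) % 256 for y in range(512) for x in range(512))
noisy = random.Random(0).randbytes(512 * 512)

for name, img in (("smooth", smooth), ("noisy", noisy)):
    packed = zlib.compress(img, level=9)
    print(f"{name}: {len(img):7d} -> {len(packed):7d} bytes")
```

The same asymmetry drives the choice between lossless and (carefully validated) lossy modes when storing diagnostic images.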

Non-diagnostic imaging 

Neuroimaging has also been used in experimental circumstances to allow people (especially disabled persons) to 
control outside devices, acting as a brain computer interface. 

Archiving and recording 

Used primarily in ultrasound imaging, capturing the image produced by a medical imaging device is required for archiving and telemedicine applications. In most scenarios, a frame grabber is used to capture the video signal from the medical device and relay it to a computer for further processing and operations.

Open source software for medical image analysis 

Several open source software packages are available for performing analysis of medical images:

• 3D Slicer
• FreeSurfer

Use in pharmaceutical clinical trials 

Medical imaging has become a major tool in clinical trials since it enables rapid diagnosis with visualization and 
quantitative assessment. 

A typical clinical trial goes through multiple phases and can take up to eight years. Clinical endpoints or outcomes are used to determine whether the therapy is safe and effective. Once a patient reaches the endpoint, he or she is generally excluded from further experimental interaction. Trials that rely solely on clinical endpoints are very costly, as they have long durations and tend to need large numbers of patients.

In contrast to clinical endpoints, surrogate endpoints have been shown to cut down the time required to confirm whether a drug has clinical benefits. Imaging biomarkers (characteristics that are objectively measured by an imaging technique and used as indicators of pharmacological response to a therapy) and surrogate endpoints have been shown to facilitate the use of small group sizes, obtaining quick results with good statistical power.

Imaging is able to reveal subtle changes that are indicative of the progression of therapy and that may be missed by more subjective, traditional approaches. Statistical bias is reduced as the findings are evaluated without any direct patient contact.

For example, measurement of tumour shrinkage is a commonly used surrogate endpoint in solid tumour response evaluation. This allows for faster and more objective assessment of the effects of anticancer drugs. In evaluating the extent of Alzheimer's disease, it is still prevalent to use behavioural and cognitive tests. MRI scans of the entire brain can accurately pinpoint the rate of hippocampal atrophy, while PET scans can measure the brain's metabolic activity by measuring regional glucose metabolism.


An imaging-based trial will usually be made up of three components: 

1. A realistic imaging protocol. The protocol is an outline that standardizes (as far as practically possible) the way in which the images are acquired using the various modalities (PET, SPECT, CT, MRI). It covers the specifics of how images are to be stored, processed and evaluated.

2. An imaging centre that is responsible for collecting the images, performing quality control and providing tools for data storage, distribution and analysis. It is important that images acquired at different time points are displayed in a standardised format to maintain the reliability of the evaluation. Certain specialised imaging contract research organizations provide end-to-end medical imaging services, from protocol design and site management through to data quality assurance and image analysis.

3. Clinical sites that recruit patients to generate the images to send back to the imaging centre. 

See also

• Preclinical imaging
• Cardiac PET
• Biomedical informatics
• Digital Imaging and Communications in Medicine
• Digital Mammography and PACS
• EMMI European Master in Molecular Imaging
• Full-body scan
• Magnetic field imaging
• Medical examination
• Medical radiography
• Medical test
• Non-invasive (medical)
• JPEG 2000 compression
• JPIP streaming
• Radiology information system
• Segmentation (image processing)
• Signal-to-noise ratio
• Society for Imaging Science and Technology




References

[1] Society of Nuclear Medicine
[2] scintigraphy. Citing: Dorland's Medical Dictionary for Health Consumers, 2007, Saunders; Saunders Comprehensive Veterinary Dictionary, 3rd ed., 2007; McGraw-Hill Concise Dictionary of Modern Medicine, 2002, The McGraw-Hill Companies
[3] Singh, Namrata Singh. Clinical Evaluation of Radiolabeled Drugs for Tubercular Imaging. LAP Lambert Academic Publishing (2010). ISBN
[4] Dhawan, A. P. (2003). Medical Image Analysis. Hoboken, NJ: Wiley-Interscience
[7] Freiherr G. Waste not, want not: Getting the most from imaging procedures. Diagnostic Imaging. March 19, 2010.
[8] Udupa, J. K. and Herman, G. T., 3D Imaging in Medicine, 2nd Edition, CRC Press, 2000
[9] Treating Medical Ailments in Real Time
[10] Hajnal, J. V., Hawkes, D. J., & Hill, D. L. (2001). Medical Image Registration. CRC Press.

Further reading 

• Burger, Wilhelm; Burge, Mark James, eds (2008). Digital Image Processing: An Algorithmic Introduction Using Java. Texts in Computer Science. New York: Springer Science+Business Media. doi:10.1007/978-1-84628-968-2. ISBN 978-1-84628-379-6.

• Baert, Albert L., ed (2008). Encyclopedia of Diagnostic Imaging. Berlin: Springer-Verlag. doi:10.1007/978-3-540-35280-8. ISBN 978-3-540-35278-5.

• Tony F. Chan and Jackie Shen (2005). Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. SIAM.

• Terry Yoo (Editor) (2004). Insight into Images.



• Robb, RA (1999). Biomedical Imaging, Visualization, and Analysis. John Wiley & Sons, Inc. ISBN 0471283533. 

• Journal of Digital Imaging (New York: Springer Science+Business Media). ISSN 0897-1889. 

• Using JPIP for Standard-Compliant Sharing of Medical Image Data, a white paper by Aware Inc.

External links 

• Medical imaging (http://www.dmoz.Org/Health/Medicine/Imaging//) at the Open Directory Project

• Medical Image Database: Free Indexed Online Images

• What is JPIP?


Radiology

Radiology is the branch of medicine that uses imaging to see inside the human body. Radiologists utilize an array of imaging technologies (such as ultrasound, computed tomography (CT), nuclear medicine, positron emission tomography (PET) and magnetic resonance imaging (MRI)) to diagnose or treat diseases. Interventional radiology is the performance of (usually minimally invasive) medical procedures with the guidance of imaging technologies. The acquisition of medical imaging is usually carried out by the radiographer or radiologic technologist.

Acquisition of radiological images 

The following imaging modalities are used in the field of diagnostic radiology.

A radiologist interprets medical images on a modern picture archiving and communication system (PACS) workstation. San Diego, CA.
Projection (plain) radiography 

Radiographs (or roentgenographs, named after the discoverer of X-rays, Wilhelm Conrad Röntgen) are produced by the transmission of X-rays through a patient to a capture device, then converted into an image for diagnosis. The original, and still common, imaging medium is silver-impregnated film. In film-screen radiography an X-ray tube generates a beam of X-rays, which is aimed at the patient. The X-rays that pass through the patient are filtered to reduce scatter and noise and then strike an undeveloped film, held tightly to a screen of light-emitting phosphors in a light-tight cassette. The film is then developed chemically and an image appears on the film. Film-screen radiography is now being replaced by digital radiography (DR), in which the X-rays strike a plate of sensors that converts the signals generated into digital information and an image on a computer screen. Plain radiography was the only imaging modality available during the first 50 years of radiology. It is still the first study ordered in evaluation of the lungs, heart and skeleton because of its wide availability, speed and relatively low cost.

Madura foot X-ray




Fluoroscopy and angiography are special applications of X-ray imaging, in which a fluorescent screen and image intensifier tube are connected to a closed-circuit television system. This allows real-time imaging of structures in motion or augmented with a radiocontrast agent. Radiocontrast agents are administered, often swallowed or injected into the body of the patient, to delineate anatomy and functioning of the blood vessels, the genitourinary system or the gastrointestinal tract. Two radiocontrasts are presently in use. Barium (as BaSO4) may be given orally or rectally for evaluation of the GI tract. Iodine, in multiple proprietary forms, may be given by oral, rectal, intraarterial or intravenous routes. These radiocontrast agents strongly absorb or scatter X-ray radiation, and in conjunction with the real-time imaging allow demonstration of dynamic processes, such as peristalsis in the digestive tract or blood flow in arteries and veins. Iodine contrast may also be concentrated in abnormal areas more or less than in normal tissues, making abnormalities (tumors, cysts, inflammation) more conspicuous. Additionally, in specific circumstances air can be used as a contrast agent for the gastrointestinal system and carbon dioxide can be used as a contrast agent in the venous system; in these cases, the contrast agent attenuates the X-ray radiation less than the surrounding tissues.

Interventional radiology 

Interventional radiology (abbreviated IR or sometimes VIR for vascular and interventional radiology, also 
known as Surgical Radiology or Image-Guided Surgery) is a subspecialty of radiology in which minimally invasive 
procedures are performed using image guidance. Some of these procedures are done for purely diagnostic purposes 
(e.g., angiogram), while others are done for treatment purposes (e.g., angioplasty). 

The basic concept behind interventional radiology is to diagnose or treat pathology with the most minimally invasive 
technique possible. Interventional radiologists diagnose and treat several disorders including peripheral vascular 
disease, renal artery stenosis, inferior vena cava filter placement, gastrostomy tube placements, biliary stents and 
hepatic interventions. Images are used for guidance and the primary instruments used during the procedure are 
needles and tiny tubes called catheters. The images provide road maps that allow the interventional radiologist to 
guide these instruments through the body to the areas containing disease. By minimizing the physical trauma to the 
patient, peripheral interventions can reduce infection rates and recovery time as well as shorten hospital stays. To be 
a trained interventionalist in the United States, an individual typically requires fifteen years of post-high school 
training, of which seven years is spent in residency. 

CT scanning 

CT imaging uses X-rays in conjunction with computing algorithms to image the body. In CT, an X-ray-generating tube opposite an X-ray detector (or detectors) in a ring-shaped apparatus rotates around a patient, producing a computer-generated cross-sectional image (tomogram). CT is acquired in the axial plane, while coronal and sagittal images can be rendered by computer reconstruction.
Radiocontrast agents are often used with CT for enhanced delineation 
of anatomy. Although radiographs provide higher spatial resolution, 
CT can detect more subtle variations in attenuation of X-rays. CT 
exposes the patient to more ionizing radiation than a radiograph. 
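The subtle variations in attenuation that CT detects are conventionally reported on the Hounsfield scale, HU = 1000 · (μ - μ_water) / μ_water, which pins water at 0 HU and air at about -1000 HU. A minimal sketch (the μ values are illustrative round numbers, not measured coefficients):

```python
def hounsfield_units(mu: float, mu_water: float) -> float:
    """CT number in Hounsfield units: HU = 1000 * (mu - mu_water) / mu_water.
    By definition water maps to 0 HU and air to about -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water

MU_WATER = 0.19  # illustrative linear attenuation coefficient of water (1/cm)

samples = {"air": 0.0, "water": 0.19, "muscle-like": 0.20, "bone-like": 0.38}
for name, mu in samples.items():
    print(f"{name:12s}: {hounsfield_units(mu, MU_WATER):+8.1f} HU")
```

Soft tissues cluster within a few tens of HU of water, which is why CT can distinguish them even though a plain radiograph cannot.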

Spiral multi-detector CT utilizes 8, 16, 64 or more detectors during continuous motion of the patient through the radiation beam to obtain much finer detail images in a shorter exam time. With rapid administration of IV contrast during the CT scan, these fine-detail images can be reconstructed into 3D images of the carotid, cerebral and coronary arteries (CTA, CT angiography).

Brain CT Scan image slice 


CT scanning has become the test of choice in diagnosing some urgent and emergent conditions such as cerebral 
hemorrhage, pulmonary embolism (clots in the arteries of the lungs), aortic dissection (tearing of the aortic wall), 
appendicitis, diverticulitis, and obstructing kidney stones. Continuing improvements in CT technology including 
faster scanning times and improved resolution have dramatically increased the accuracy and usefulness of CT 
scanning and consequently increased utilization in medical diagnosis. 

The first commercially viable CT scanner was invented by Sir Godfrey Hounsfield at EMI Central Research Labs, Great Britain, in 1972. EMI owned the distribution rights to the Beatles' music, and it was their profits which funded the research. Hounsfield and Allan MacLeod Cormack shared the Nobel Prize in Physiology or Medicine in 1979 for the invention of CT scanning. The first CT scanner in North America was installed at the Mayo Clinic in Rochester, MN in 1972.


Ultrasound

Medical ultrasonography uses ultrasound (high-frequency sound waves) to visualize soft tissue structures in the body in real time. No ionizing radiation is involved, but the quality of the images obtained using ultrasound is highly dependent on the skill of the person (ultrasonographer) performing the exam. Ultrasound is also limited by its inability to image through air (lungs, bowel loops) or bone. The use of ultrasound in medical imaging has developed mostly within the last 30 years. The first ultrasound images were static and two-dimensional (2D), but with modern ultrasonography 3D reconstructions can be observed in real time, effectively becoming 4D.

Because ultrasound does not utilize ionizing radiation, unlike radiography, CT scans, and nuclear medicine imaging 
techniques, it is generally considered safer. For this reason, this modality plays a vital role in obstetrical imaging. 
Fetal anatomic development can be thoroughly evaluated allowing early diagnosis of many fetal anomalies. Growth 
can be assessed over time, important in patients with chronic disease or gestation-induced disease, and in multiple 
gestations (twins, triplets etc.). Color-Flow Doppler Ultrasound measures the severity of peripheral vascular disease 
and is used by Cardiology for dynamic evaluation of the heart, heart valves and major vessels. Stenosis of the carotid 
arteries can presage cerebral infarcts (strokes). DVT in the legs can be found via ultrasound before it dislodges and 
travels to the lungs (pulmonary embolism), which can be fatal if left untreated. Ultrasound is useful for image-guided interventions like biopsies and drainages (such as thoracentesis). Small portable ultrasound devices now replace
peritoneal lavage in the triage of trauma victims by directly assessing for the presence of hemorrhage in the 
peritoneum and the integrity of the major viscera including the liver, spleen and kidneys. Extensive hemoperitoneum 
(bleeding inside the body cavity) or injury to the major organs may require emergent surgical exploration and repair. 
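Doppler flow measurement rests on one formula: the frequency shift of an echo from blood moving at speed v is Δf = 2 · f0 · v · cos θ / c, where c ≈ 1540 m/s is the conventional speed of sound in soft tissue. A minimal sketch (the transducer frequency, beam angle and velocity are illustrative example values):

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, conventional value assumed for soft tissue

def doppler_shift_hz(f0_hz: float, v_m_s: float, angle_deg: float) -> float:
    """Doppler shift of reflected ultrasound: df = 2 * f0 * v * cos(theta) / c."""
    return 2.0 * f0_hz * v_m_s * math.cos(math.radians(angle_deg)) / SPEED_OF_SOUND

# Blood moving at 0.5 m/s, insonated at 5 MHz with a 60-degree beam angle
shift = doppler_shift_hz(5e6, 0.5, 60.0)
print(f"Doppler shift ~ {shift:.0f} Hz")
```

Note the cos θ factor: the measured shift vanishes when the beam is perpendicular to the flow, which is why the operator's beam angle matters clinically.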



MRI (Magnetic Resonance Imaging) 

MRI uses strong magnetic fields to align atomic nuclei (usually hydrogen protons) within body tissues, then uses a radio signal to disturb the axis of rotation of these nuclei and observes the radio frequency signal generated as the nuclei return to their baseline states. The radio signals are collected by small antennae, called coils, placed near the area of interest. An advantage of MRI is its ability to produce images in axial, coronal, sagittal and multiple oblique planes with equal ease. MRI scans give the best soft tissue contrast of all the imaging modalities. With advances in scanning speed and spatial resolution, and improvements in computer 3D algorithms and hardware, MRI has become an important tool in musculoskeletal radiology and neuroradiology.

MRI image of human knee with a displaced 

One disadvantage is that the patient has to hold still for long periods of 
time in a noisy, cramped space while the imaging is performed. 

Claustrophobia severe enough to terminate the MRI exam is reported in up to 5% of patients. Recent improvements 
in magnet design including stronger magnetic fields (3 teslas), shortening exam times, wider, shorter magnet bores 
and more open magnet designs, have brought some relief for claustrophobic patients. However, in magnets of equal 
field strength there is often a trade-off between image quality and open design. MRI has great benefit in imaging the 
brain, spine, and musculoskeletal system. The modality is currently contraindicated for patients with pacemakers, 
cochlear implants, some indwelling medication pumps, certain types of cerebral aneurysm clips, metal fragments in 
the eyes and some metallic hardware due to the powerful magnetic fields and strong fluctuating radio signals the 
body is exposed to. Areas of potential advancement include functional imaging, cardiovascular MRI, as well as MR 
image guided therapy. 

Nuclear Medicine 

Nuclear medicine imaging involves the administration into the patient of radiopharmaceuticals consisting of 
substances with affinity for certain body tissues labeled with radioactive tracer. The most commonly used tracers are 
Technetium-99m, Iodine-123, Iodine-131, Gallium-67 and Thallium-201. The heart, lungs, thyroid, liver, 
gallbladder, and bones are commonly evaluated for particular conditions using these techniques. While anatomical 
detail is limited in these studies, nuclear medicine is useful in displaying physiological function. The excretory 
function of the kidneys, iodine concentrating ability of the thyroid, blood flow to heart muscle, etc. can be measured. 
The principal imaging device is the gamma camera which detects the radiation emitted by the tracer in the body and 
displays it as an image. With computer processing, the information can be displayed as axial, coronal and sagittal 
images (SPECT images, single-photon emission computed tomography). In the most modern devices Nuclear 
Medicine images can be fused with a CT scan taken quasi-simultaneously so that the physiological information can 
be overlaid or co-registered with the anatomical structures to improve diagnostic accuracy. 
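Because the tracer decays while the study proceeds, quantitative nuclear medicine measurements must account for radioactive decay. A small sketch of the exponential decay law, using the well-known half-life of Technetium-99m of about 6 hours (the function name and example dose are illustrative):

```python
import math

T_HALF_TC99M = 6.01  # hours; physical half-life of Technetium-99m

def remaining_activity(a0_mbq, hours):
    """Activity left after `hours` of radioactive decay (exponential law)."""
    return a0_mbq * math.exp(-math.log(2) * hours / T_HALF_TC99M)

# A 740 MBq dose has decayed to a quarter after two half-lives:
print(round(remaining_activity(740, 12.02), 1))  # → 185.0
```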

Positron emission tomography (PET) scanning also falls under "nuclear medicine." In PET scanning, a radioactive, 
biologically active substance, most often Fludeoxyglucose (18F), is injected into a patient and the radiation emitted 
by the patient is detected to produce multi-planar images of the body. Metabolically more active tissues, such as 
cancer, concentrate the active substance more than normal tissues. PET images can be combined (or "fused") with an 
anatomic imaging study (currently generally CT images), to more accurately localize PET findings and thereby 
improve diagnostic accuracy. 



Teleradiology is the transmission of radiographic images from one location to another for interpretation by a 
radiologist. It is most often used to allow rapid interpretation of emergency room, ICU and other emergent 
examinations after hours of usual operation, at night and on weekends. In these cases the images are often sent across 
time zones (e.g. to Spain, Australia, India) with the receiving radiologist working normal daylight hours. 
Teleradiology can also be utilized to obtain consultation with an expert or sub-specialist about a complicated or 
puzzling case. 

Teleradiology requires a sending station, high speed Internet connection and high quality receiving station. At the 
transmission station, plain radiographs are passed through a digitizing machine before transmission, while CT scans, 
MRIs, Ultrasounds and Nuclear Medicine scans can be sent directly as they are already a stream of digital data. The 
computer at the receiving end will need to have a high-quality display screen that has been tested and cleared for 
clinical purposes. The interpreting radiologist then faxes or e-mails the radiology report to the requesting physician. 

The major advantage of teleradiology is the ability to utilize different time zones to provide real-time emergency 
radiology services around-the-clock. The disadvantages include higher costs, limited contact between the ordering 
physician and the radiologist, and the inability to cover for procedures requiring an onsite radiologist. Laws and 
regulations concerning the use of teleradiology vary among the states, with some states requiring a license to practice 
medicine in the state sending the radiologic exam. Some states require the teleradiology report to be preliminary with 
the official report issued by a hospital staff radiologist. 

Radiologist training 
United States 

Radiology is a competitive field in medicine and successful applicants are often near the top of their medical school 
class, with high board scores. The field is rapidly expanding due to advances in computer technology, which is 
closely linked to modern imaging. Diagnostic radiologists must complete at least 13 years of post-high school 
education, including 4 years of prerequisite undergraduate training, 4 years of medical school, and 5 years of 
post-graduate training. The first postgraduate year is usually a transitional year of various rotations, but is sometimes 
a preliminary internship in medicine or surgery. A four-year diagnostic radiology residency follows. The Radiology 
resident must pass a medical physics board exam covering the science and technology of ultrasound, CTs, x-rays, 
nuclear medicine and MRI. Core knowledge of the radiologist includes radiobiology, which is the study of the 
effects of ionizing radiation on living tissue. Near the completion of residency, the radiologist in training is eligible 
to take the written and oral board examinations administered by the American Board of Radiology (ABR). Starting 
in 2010, the ABR's board examination structure will be changed to include two computer-based exams, one given 
after the third year of residency training, and the second given 18 months after the first. 

The Wayne State University School of Medicine and the University of South Carolina School of Medicine both offer 
an integrated radiology curriculum during their respective MD Programs in collaboration with GE Medical led by 
investigators of the Advanced Diagnostic Ultrasound in Microgravity study. 

Following completion of residency training, radiologists either begin their practice or enter into sub-speciality 
training programs known as fellowships. Examples of sub-speciality training in radiology include abdominal 
imaging, thoracic imaging, CT/Ultrasound, MRI, musculoskeletal imaging, interventional radiology, neuroradiology, 
interventional neuroradiology, paediatric radiology, mammography and women's imaging. Fellowship training 


programs in radiology are usually 1 or 2 years in length. 

Radiographic exams are usually performed by radiologic technologists (also known as diagnostic radiographers), 
who in the United States hold a 2-year Associate's Degree and in the UK a 3-year Honours Degree. 


Veterinary radiologists are veterinarians that specialize in the use of X-rays, ultrasound, MRI and nuclear medicine 
for diagnostic imaging or treatment of disease in animals. They are certified in either diagnostic radiology or 
radiation oncology by the American College of Veterinary Radiology. 


Germany 

After obtaining medical licensure, German radiologists complete a 5-year residency, culminating with a board 
examination (known as Facharztausbildung). 


Until 2008, a Radiology training program had a duration of four years. At present, a radiology training program lasts 
five years. Further training is required for specialization in radiotherapy or nuclear medicine. 

See also 

• X-ray image intensifier (C-Arm), equipment that uses x-rays to produce an image feed displayed on a TV screen 

• Digital Mammography and PACS 

• Interventional radiology, in which minimally invasive procedures are performed using image guidance 

• Medical radiography, the use of ionizing electromagnetic radiation, such as X-rays, in medicine 

• Positron emission tomography, which produces a three-dimensional image 

• Radiobiology, the interdisciplinary science that studies the biological effects of ionizing and non-ionizing 
radiation of the whole electromagnetic spectrum 

• Radiation protection, the science of protecting people and the environment from the harmful effects of ionizing radiation 

• Radiography, the use of X-rays to view unseen or hard-to-image objects 

• Radiosensitivity, the susceptibility of organic tissues to the harmful effect of ionizing radiation 

• Teleradiology, the transmission by electronic means of radiological patient images from one location to another 
for interpretation or consultation 

• Radiology technician 


References 

[1] Novelline, Robert. Squire's Fundamentals of Radiology, 5th edition. Harvard University Press, 1997. ISBN 0-674-83339-2. 

[2] Society of Interventional Radiology — Global Statement Defining Interventional Radiology (IR_Global_Statement.pdf). 

[3] Herman, G. T., Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd edition, Springer, 2009. 

[4] Filler, Aaron (2010). "The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Internet Journal of Neurosurgery 7 (1). 

[7] http://www.jultrasoundmed.org/cgi/content/abstract/27/5/745 



External links 

• Radiology (http://www.dmoz.org/Health/Medicine/Medical_Specialties/Radiology/) at the Open Directory 

• MedPix medical imaging database 

• MedWorm Radiology directory 

• MyPACS medical imaging database 

• Radiolopolis, an international Radiology community for education, research and 
clinical practice 

• radRounds Radiology Network, a professional and social network for Radiology 

• An international dual source CT experts community for education, research and 
clinical practice 


Molecular imaging 

Molecular imaging originated from the field of radiopharmacology due to the need to better understand the 
fundamental molecular pathways inside organisms in a noninvasive manner. 

Molecular imaging & therapy 

Molecular imaging emerged in the early twenty-first century as a discipline at the intersection of molecular 
biology and in vivo imaging. It enables the visualisation of cellular function and the follow-up of molecular 
processes in living organisms without perturbing them. The many potential applications of this field include the 
diagnosis of diseases such as cancer, and neurological and cardiovascular diseases. The technique also contributes to 
improving the treatment of these disorders by optimizing the pre-clinical and clinical tests of new medication. It is 
also expected to have a major economic impact due to earlier and more precise diagnosis. Molecular and 
functional imaging has taken on a new direction since the description of the human genome. New paths in 
fundamental research, as well as in applied and industrial research, render the task of scientists more complex and 
increase the demands on them. Therefore, a tailor-made teaching program is in order. 

Molecular imaging differs from traditional imaging in that probes known as biomarkers are used to help image 
particular targets or pathways. Biomarkers interact chemically with their surroundings and in turn alter the image 
according to molecular changes occurring within the area of interest. This process is markedly different from 
previous methods of imaging which primarily imaged differences in qualities such as density or water content. This 
ability to image fine molecular changes opens up an incredible number of exciting possibilities for medical 
application, including early detection and treatment of disease and basic pharmaceutical development. Furthermore, 
molecular imaging allows for quantitative tests, imparting a greater degree of objectivity to the study of these areas. 


One emerging technology is MALDI molecular imaging based on mass spectrometry. Many areas of research are 
being conducted in the field of molecular imaging. Much research is currently centered around detecting what is 
known as a predisease state or molecular states that occur before typical symptoms of a disease are detected. Other 
important veins of research are the imaging of gene expression and the development of novel biomarkers. 
Organizations such as the SNM Molecular Imaging Center of Excellence (MICoE) have formed to support research 
in this field. In Europe, other "networks of excellence" such as DiMI (Diagnostics in Molecular Imaging) or EMIL 
(European Molecular Imaging Laboratories) work on this new science, integrating activities and research in the field. 
In this way, a European Master Programme "EMMI" is being set up to train a new generation of professionals in 
molecular imaging. 

Recently the term "Molecular Imaging" has been applied to a variety of microscopy and nanoscopy techniques 
including live-cell microscopy, Total Internal Reflection Fluorescence (TIRF) microscopy, STimulated Emission 
Depletion (STED) nanoscopy and Atomic Force Microscopy (AFM), as in these techniques images of molecules are the readout. 

Imaging modalities 

There are many different modalities that can be used for noninvasive molecular imaging. Each has its own 
strengths and weaknesses, and some are more adept at imaging multiple targets than others. 

Magnetic resonance imaging (MRI) 

MRI has the advantages of having very high spatial resolution and is very adept at morphological imaging and 
functional imaging. MRI does have several disadvantages though. First, MRI has a sensitivity of around 10⁻³ mol/L 
to 10⁻⁵ mol/L, which compared to other types of imaging can be very limiting. This problem stems from the fact that 
the difference between atoms in the high energy state and the low energy state is very small. For example, at 1.5 
teslas the difference between high and low energy states is approximately 9 molecules per 2 million. Improvements 
to increase MR sensitivity include increasing magnetic field strength, and hyperpolarization via optical pumping or 
dynamic nuclear polarization. There are also a variety of signal amplification schemes based on chemical exchange 
that increase sensitivity. 
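The "9 molecules per 2 million" figure can be checked from the Boltzmann distribution in its high-temperature approximation: the fractional excess of spins in the lower energy state is roughly hf/(2kT), where f is the Larmor frequency. A quick sketch (body temperature is assumed; the helper name is ours):

```python
# High-field, high-temperature approximation of the net nuclear spin
# excess in MRI: delta_N / N ~ h*f / (2*k*T).
H = 6.62607015e-34      # Planck constant, J*s
K_B = 1.380649e-23      # Boltzmann constant, J/K
GAMMA_BAR_H = 42.577e6  # 1H gyromagnetic ratio / 2*pi, in Hz/T

def excess_spins_per_2_million(field_tesla, temp_kelvin=310.0):
    """Net spin excess, expressed per 2 million nuclei, at body temperature."""
    f = GAMMA_BAR_H * field_tesla              # Larmor frequency in Hz
    fraction = H * f / (2 * K_B * temp_kelvin)  # fractional population excess
    return fraction * 2e6

print(round(excess_spins_per_2_million(1.5), 1))
```

At 1.5 T this evaluates to roughly ten excess spins per two million, consistent with the order of magnitude quoted above.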

Optical imaging 

There are a number of approaches used for optical imaging. The various methods depend upon fluorescence, 
bioluminescence, absorption or reflectance as the source of contrast. 

Optical imaging's most valuable attribute is that it and ultrasound do not have strong safety concerns like the other 
medical imaging modalities. 

The downside of optical imaging is the lack of penetration depth, especially when working at visible wavelengths. 
Depth of penetration is related to the absorption and scattering of light, which is primarily a function of the 
wavelength of the excitation source. Light is absorbed by endogenous chromophores found in living tissue (e.g. 
hemoglobin, melanin, and lipids). In general, light absorption and scattering decrease with increasing wavelength. 
Below ~700 nm (e.g. visible wavelengths), these effects result in shallow penetration depths of only a few 
millimeters. Thus, in the visible region of the spectrum, only superficial assessment of tissue features is possible. 
Above 900 nm, water absorption can interfere with signal-to-background ratio. Because the absorption coefficient of 
tissue is considerably lower in the near infrared (NIR) region (700-900 nm), light can penetrate more deeply, to 
depths of several centimeters. 
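The depth-of-penetration argument above follows the Beer-Lambert law, I/I0 = exp(-mu*x). A small illustration with assumed, order-of-magnitude effective attenuation coefficients (the specific numbers are illustrative, not measured tissue values):

```python
import math

# Illustrative (assumed) effective attenuation coefficients for soft tissue,
# in mm^-1; real values vary widely with tissue type and wavelength.
MU_EFF = {"visible_550nm": 1.0, "nir_800nm": 0.1}

def transmitted_fraction(mu_per_mm, depth_mm):
    """Beer-Lambert law: fraction of light remaining, I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_mm * depth_mm)

for label, mu in MU_EFF.items():
    # Depth at which the intensity has dropped to 1/e (~37%):
    print(f"{label}: 1/e penetration depth = {1.0 / mu:.0f} mm")
```

With these example coefficients the 1/e depth grows from about a millimeter in the visible range to about a centimeter in the NIR window, mirroring the behaviour described in the text.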

Fluorescent probes and labels are an important tool for optical imaging. A number of near-infrared (NIR) 
fluorophores have been employed for in vivo imaging, including Kodak X-SIGHT Dyes and Conjugates, DyLight 
750 and 800 Fluors, Cy 5.5 and 7 Fluors, Alexa Fluor 680 and 750 Dyes, IRDye 680 and 800CW Fluors. Quantum 
dots, with their photostability and bright emissions, have generated a great deal of interest; however, their size 
precludes efficient clearance from the circulatory and renal systems while exhibiting long-term toxicity. 
Several studies have demonstrated the use of infrared dye-labeled probes in optical imaging. 

1. In a comparison of gamma scintigraphy and NIR imaging, a cyclopentapeptide dual-labeled with ¹¹¹In and 
an NIR fluorophore was used to image αvβ3-integrin positive melanoma xenografts. 

2. Near-infrared labeled RGD targeting αvβ3-integrin has been used in numerous studies to target a variety of 
cancers. 

3. An NIR fluorophore has been conjugated to epidermal growth factor (EGF) for imaging of tumor progression. 

4. An NIR fluorophore was compared to Cy5.5, suggesting that longer-wavelength dyes may produce more 
effective targeting agents for optical imaging. 

5. Pamidronate has been labeled with an NIR fluorophore and used as a bone imaging agent to detect osteoblastic 
activity in a living animal. 

6. An NIR fluorophore-labeled GPI, a potent inhibitor of PSMA (prostate specific membrane antigen). 

7. Use of human serum albumin labeled with an NIR fluorophore as a tracking agent for mapping of sentinel lymph 
nodes. 

8. 2-Deoxy-D-glucose labeled with an NIR fluorophore. 


Single photon emission computed tomography (SPECT) 

The main purpose of SPECT when used in brain imaging is to measure the regional 
cerebral blood flow (rCBF). The development of computed tomography in the 1970s 
allowed mapping of the distribution of the radioisotopes in the brain, and led to the 
technique now called SPECT. 

The imaging agent used in SPECT emits gamma rays, as opposed to the positron 
emitters (such as ¹⁸F) used in PET. There are a range of radiotracers (such as ⁹⁹ᵐTc, 
¹¹¹In, ¹²³I, ²⁰¹Tl) that can be used, depending on the specific application. 

Xenon (¹³³Xe) gas is one such radiotracer. It has been shown to be valuable for 
diagnostic inhalation studies for the evaluation of pulmonary function; for imaging the 
lungs; and may also be used to assess rCBF. Detection of this gas occurs via a gamma 
camera, which is a scintillation detector consisting of a collimator, a NaI crystal, and a 
set of photomultiplier tubes. 

By rotating the gamma camera around the head, a three dimensional image of the 
distribution of the radiotracer can be obtained by employing filtered back projection or 
other tomographic techniques. The radioisotopes used in SPECT have relatively long 
half lives (a few hours to a few days) making them easy to produce and relatively cheap. 
This represents the major advantage of SPECT as a brain imaging technique, since it is 
significantly cheaper than either PET or fMRI. However it lacks good spatial (i.e., 
where exactly the particle is) or temporal (i.e., did the contrast agent signal happen at 
this millisecond, or that millisecond) resolution. Additionally, due to the radioactivity of 
the contrast agent, there are safety aspects concerning the administration of 
radioisotopes to the subject, especially for serial studies. 

SPECT image (bone tracer) of a mouse MIP 

Positron emission tomography (PET) 

Positron emission tomography is a nuclear medicine imaging technique which produces a three-dimensional image 
or picture of functional processes in the body. The theory behind PET is simple enough. First a molecule is tagged 
with a positron emitting isotope. The emitted positrons annihilate with nearby electrons, producing two 511 keV photons 
travelling in opposite directions, approximately 180 degrees apart. These photons are then detected by the scanner, which can estimate 
the density of positron annihilations in a specific area. When enough interactions and annihilations have occurred, 
the density of the original molecule may be measured in that area. Typical isotopes include ¹¹C, ¹³N, ¹⁵O, ¹⁸F, ⁶⁴Cu, 
⁶²Cu, ¹²⁴I, ⁷⁶Br, ⁸²Rb and ⁶⁸Ga, with ¹⁸F being the most clinically utilized. One of the major disadvantages of PET 
is that most of the probes must be made with a cyclotron. Most of these probes also have a half life measured in 
hours, forcing the cyclotron to be on site. These factors can make PET prohibitively expensive. PET imaging does 
have many advantages though. First and foremost is its sensitivity: a typical PET scanner can detect between 
10⁻¹¹ mol/L and 10⁻¹² mol/L concentrations. 
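The 511 keV photon energy is not arbitrary: it is the rest-mass energy of the electron (and of the positron), E = m_e c², released when the pair annihilates into two photons. A quick check using standard physical constants:

```python
# Each annihilation photon carries the rest-mass energy of one electron,
# E = m_e * c^2, converted here from joules to kiloelectronvolts.
M_E = 9.1093837015e-31   # electron rest mass, kg
C = 299792458.0          # speed of light, m/s
EV = 1.602176634e-19     # joules per electronvolt

photon_energy_kev = M_E * C ** 2 / EV / 1000
print(round(photon_energy_kev))  # → 511
```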



Probes and the imaging of molecular interactions 

In order to image multiple targets, you must first identify the molecular interactions you intend to exploit and 
develop probes for them. Developing good probes is often difficult and is an area of intense research. 

See also 

• Predictive medicine 

• Translational medicine 

• Chemical imaging 

• EMMI European Master in Molecular Imaging 


References 

[1] Weissleder, R., Mahmood, U., "Molecular Imaging", Radiology 2001; 219:316-333. 
[2] Olive, D.M., Kovar, J.L., Simpson, M.A., Schutz-Geschwender, A., "A systematic approach to the development of fluorescent contrast agents 
for optical imaging of mouse cancer models", Analytical Biochemistry 2007; 367(1):1-12. 
[3] Houston, J.P., Ke, S., Wang, W., Li, C., and Sevick-Muraca, E.M., J. Biomed. Optics 10, 054010 (2005). 
[4] Chen, K., Xie, J., Chen, X., Molecular Imaging 8(2) (March-April 2009): 65-73. 
[5] Kovar, J.L., Johnson, M.A., Volcheck, W.M., Chen, J., and Simpson, M.A., Am. J. Pathol. 169, 1415 (2006). 
[6] Adams, K.E., Ke, S., Kwan, S., Liang, F., Fan, Z., Lu, Y., Barry, M.A., Mawad, M.E., and Sevick-Muraca, E.M., Journal of Biomedical 
Optics 12, 024017 (2007). 
[7] Zaheer, A., Lenkinski, R.E., Mahmood, A., Jones, A.G., Cantley, L.C., Frangioni, J.V., Nat. Biotechnol. 19, 1148 (2001). 
[8] Humblet, V., Lapidus, R., Williams, L.R., Tsukamoto, T., Rojas, C., Majer, P., Hin, B., Ohnishi, S., De Grand, A.M., Zaheer, A., Renze, J.T., 
Nakayama, A., Slusher, B.S., and Frangioni, J.V., Molecular Imaging 4, 448 (2005). 
[9] Ohnishi, S., Lomnes, S.J., Laurence, R.G., Gogbashian, A., Mariani, G., and Frangioni, J.V., Molecular Imaging 4, 172 (2005). 
[10] Kovar, J., Volcheck, W., Sevick-Muraca, E., Simpson, M.A., and Olive, D.M., Analytical Biochemistry 384 (2009): 254-262. 


Further reading 

1. Fuchs, V.R., Sox, H.C. Jr., Health Affairs 2001: 20(5), 30-42. 

2. Weissleder, R., Mahmood, U., "Molecular imaging", Radiology 2001: 219:316-333. 

3. Piwnica-Worms, D., Luker, K.E., "Imaging protein-protein interactions in whole cells and living animals", Ernst 
Schering Res Found Workshop 2005;(49):35-41. 

4. Massoud, T.F., Gambhir, S.S., "Molecular imaging in living subjects: seeing fundamental biological processes in a 
new light", Genes & Development 2003: 545-580. 

External links 

• European Master in Molecular Imaging 

• Society for Molecular Imaging 

• Understanding Molecular Imaging and biomarker usage 

• Annual Imaging Meeting, an annual conference on clinical and 
pre-clinical imaging 

• Molecular Imaging Research update 

• Molecular Imaging Center 

• Understanding Near-Infrared Imaging, a resource to better understand the benefits of near-infrared imaging 

• Society for Neuroscience 

• European Society for Molecular Imaging 

• European Institute for Biomedical Imaging Research 




Tomography 

Tomography is imaging by sections or sectioning, through the use 
of any kind of penetrating wave. A device used in tomography is 
called a tomograph, while the image produced is a tomogram. 
The method is used in radiology, archaeology, biology, 
geophysics, oceanography, materials science, astrophysics and 
other sciences. In most cases it is based on the mathematical 
procedure called tomographic reconstruction. The word "tomography" is 
derived from the Greek tomos ("part", "section" or "slice") and 
graphein ("to write"). A tomography of several sections of the body is known 
as a polytomography. 


Basic principle of tomography: superposition-free tomographic cross sections S1 and S2 compared with 
the projected image P 
In conventional medical X-ray tomography, clinical staff make a 

sectional image through a body by moving an X-ray source and 

the film in opposite directions during the exposure. Consequently, structures in the focal plane appear sharper, while 

structures in other planes appear blurred. By modifying the direction and extent of the movement, operators can 

select different focal planes which contain the structures of interest. Before the advent of more modern 

computer-assisted techniques, this technique, ideated in the 1930s by the radiologist Alessandro Vallebona, proved 

useful in reducing the problem of superimposition of structures in projectional (shadow) radiography. 

Modern tomography 

More modern variations of tomography involve gathering projection data from multiple directions and feeding the 
data into a tomographic reconstruction software algorithm processed by a computer. Different types of signal 
acquisition can be used in similar calculation algorithms in order to create a tomographic image. As of 2005, 
tomograms are derived using several different physical phenomena, listed in the following table. 

Physical phenomenon              Type of tomogram 
X-rays                           CT 
gamma rays                       SPECT 
radio-frequency waves            MRI 
electron-positron annihilation   PET 
electrons                        Electron tomography or 3D TEM 
ions                             atom probe 

Some recent advances rely on using simultaneously integrated physical phenomena, e.g. X-rays for both CT and 
angiography, combined CT/MRI and combined CT/PET. 



The term volume imaging might subsume these technologies more accurately than the term tomography. However, in 
the majority of cases in clinical routine, staff request output from these procedures as 2-D slice images. As more and 
more clinical decisions come to depend on more advanced volume visualization techniques, the terms 
tomography/tomogram may go out of fashion. 

Many different reconstruction algorithms exist. Most algorithms fall into one of two categories: filtered back 
projection (FBP) and iterative reconstruction (IR). These procedures give inexact results: they represent a 
compromise between accuracy and computation time required. FBP demands fewer computational resources, while 
IR generally produces fewer artifacts (errors in the reconstruction) at a higher computing cost. 
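The backprojection step shared by these algorithms can be sketched in a few lines. The following is a minimal, unfiltered backprojection; filtered back projection would additionally apply a ramp filter to each projection before this step. The toy sinogram of a single central point source is our own illustration:

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered backprojection: smear each 1-D projection back across
    the image along its acquisition angle and sum over all angles."""
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, angle in zip(sinogram, angles_deg):
        theta = np.deg2rad(angle)
        # Detector coordinate of each pixel for this viewing angle
        s = (xs - centre) * np.cos(theta) + (ys - centre) * np.sin(theta)
        idx = np.clip(np.round(s + centre).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon

# Toy sinogram of a point source at the image centre: every projection
# is a spike in the central detector bin.
size, angles = 65, np.arange(0, 180, 1.0)
sino = np.zeros((len(angles), size))
sino[:, size // 2] = 1.0

recon = backproject(sino, angles, size)
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # → (32, 32)
```

The reconstruction peaks at the image centre, where the point source was, but is surrounded by the characteristic star-shaped blur that the ramp filter of FBP is designed to remove.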

Although MRI and ultrasound produce cross-sectional images, they don't acquire data from different directions. In MRI, 
spatial information is obtained by using magnetic field gradients. In ultrasound, spatial information is obtained simply by 
focusing and aiming a pulsed ultrasound beam. 

Synchrotron X-ray tomographic microscopy 

Recently a new technique called synchrotron X-ray tomographic microscopy (SRXTM) allows for detailed three 
dimensional scanning of fossils. 

Types of tomography 

Name                                                       Source of data 
Atom probe tomography                                      Atom probe 
Confocal microscopy (laser scanning confocal microscopy)   Laser scanning confocal microscopy 
Cryo-electron tomography                                   Cryo-electron microscopy 
Electrical capacitance tomography                          Electrical capacitance 
Electrical resistivity tomography                          Electrical resistivity 
Electrical impedance tomography                            Electrical impedance 
Functional magnetic resonance imaging                      Magnetic resonance 
Magnetic induction tomography                              Magnetic induction 
Magnetic resonance imaging (nuclear magnetic resonance)    Nuclear magnetic moment 
Neutron tomography                                         Neutrons 
Ocean acoustic tomography                                  Sound waves 
Optical coherence tomography                               Interferometry 
Optical projection tomography                              Optical microscope 
Photoacoustic imaging in biomedicine                       Photoacoustic spectroscopy 
Positron emission tomography                               Positron emission 
Positron emission tomography - computed tomography         Positron emission & X-ray 
Quantum tomography                                         Quantum state 
Single photon emission computed tomography                 Gamma ray 
Seismic tomography                                         Ground-penetrating radar 
Thermoacoustic imaging                                     Photoacoustic spectroscopy 
Ultrasound-modulated optical tomography                    Ultrasound 
Ultrasound transmission tomography                         Ultrasound 
X-ray tomography (CT, CAT scan; introduced 1971)           X-ray 
Zeeman-Doppler imaging                                     Zeeman effect 

Discrete tomography and Process tomography refer to processing techniques. 

See also 

Chemical imaging 

Geophysical imaging 

Medical imaging 

MRI compared with CT 

Network tomography 

Nonogram, a type of puzzle based on a discrete model of tomography 

Radon transform 

Tomographic reconstruction 


References 

[1] MeSH: Tomography. 

[2] Herman, G. T., Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd edition, Springer, 2009. 

External links 

• International Journal of Tomography & Statistics (IJTS) 

• Microtomography/Synchrotron tomography 

X-ray computed tomography 

Computed tomography (CT) is a medical imaging method employing tomography created by computer processing. 
Digital geometry processing is used to generate a three-dimensional image of the inside of an object from a large 
series of two-dimensional X-ray images taken around a single axis of rotation. 

A patient is receiving a CT scan for cancer. Outside of the scanning room is an 
imaging computer that reveals a 3D image of the body's interior. 

CT produces a volume of data which can be manipulated, through a process known as "windowing", in order to 
demonstrate various bodily structures based on their ability to block the X-ray beam. Although historically the 
images generated were in the axial or transverse plane, orthogonal to the long axis of the body, modern scanners 
allow this volume of data to be reformatted in various planes or even as volumetric (3D) representations of 
structures. Although most common in medicine, CT is also used in other fields, such as nondestructive materials 
testing. Another example is archaeological uses such as imaging the contents of sarcophagi or the DigiMorph 
project at the University of Texas at Austin, which uses a CT scanner to study biological and paleontological 
specimens. 
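The "windowing" step mentioned above maps the wide range of CT numbers (Hounsfield units) onto the narrow range of display grey levels, so that the tissues of interest span the full contrast of the screen. A minimal sketch (the level/width settings and sample values are illustrative):

```python
import numpy as np

def window(hu, level, width):
    """Map CT numbers (Hounsfield units) to 0-255 display grey levels,
    clipping everything outside the chosen window."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# An example "soft tissue" window: level 40 HU, width 400 HU.
hu = np.array([-1000, 0, 40, 240, 1000])  # air, water, soft tissue, denser tissue, bone
print(window(hu, level=40, width=400))
```

With this window, air and everything denser than 240 HU are pushed to pure black and pure white respectively, while soft tissues around 40 HU fall mid-grey.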


Usage of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed 
in the United States in 2007. [4] 

The word "tomography" is derived from the Greek tomos (slice) and graphein (to write). Computed tomography was 
originally known as the "EMI scan" as it was developed at a research branch of EMI, a company best known today 
for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and 
body section röntgenography. 

Although the term "computed tomography" could be used to describe positron emission tomography and single 
photon emission computed tomography, in practice it usually refers to the computation of tomography from X-ray 
images, especially in older medical literature and smaller medical facilities. 

In MeSH, "computed axial tomography" was used from 1977 to 1979, but the current indexing explicitly includes 
"X-ray" in the title. 





History 

In the early 1900s, the Italian radiologist Alessandro Vallebona 
proposed a method to represent a single slice of the body on the 
radiographic film. This method was known as tomography. The idea is 
based on simple principles of projective geometry: moving 
synchronously and in opposite directions the X-ray tube and the film, 
which are connected together by a rod whose pivot point is the focus; 
the image created by the points on the focal plane appears sharper, 
while the images of the other points annihilate as noise. This is only 
marginally effective, as blurring occurs only in the "x" plane. There are 
also more complex devices which can move in more than one plane 
and perform more effective blurring. 

Tomography had been one of the pillars of radiologic diagnostics until 
the late 1970s, when the availability of minicomputers and of the 
transverse axial scanning method (the latter due to the work of Godfrey 
Hounsfield and South African-born Allan McLeod Cormack) gradually 
supplanted it with CT. Mathematically, the method is based upon the 
Radon transform, introduced by Johann Radon in 1917. But as Cormack 
later recalled, he had to work out the solution himself, since it was 
only in 1972 that he learned, by chance, of Radon's work. 
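The Radon transform mentioned above can be written as a line integral of the attenuation function f(x, y) over all lines at angle θ and signed distance s from the origin; a standard formulation is:

```latex
Rf(\theta, s) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
    f(x, y)\, \delta(x \cos\theta + y \sin\theta - s)\, dx\, dy
```

Each CT projection measures samples of Rf at a fixed θ; tomographic reconstruction amounts to inverting this transform from a finite set of angles.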

The prototype CT scanner: a historic EMI-Scanner 

The first commercially viable CT scanner was invented by Sir Godfrey 
Hounsfield at EMI Central Research Laboratories in Hayes, United Kingdom, using X-rays. Hounsfield conceived 
his idea in 1967. The first EMI-Scanner was installed in Atkinson Morley Hospital in Wimbledon, England, and 
the first patient brain-scan was done on 1 October 1971. It was publicly announced in 1972. 

The original 1971 prototype took 160 parallel readings through 180 angles, each 1° apart, with each scan taking a 
little over 5 minutes. The images from these scans took 2.5 hours to be processed by algebraic reconstruction 
techniques on a large computer. The scanner had a single photomultiplier detector, and operated on the 
Translate/Rotate principle. 
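The algebraic reconstruction techniques used on the prototype can be illustrated with the Kaczmarz method, their mathematical core: each measured ray sum defines one linear equation over the image pixels, and the estimate is repeatedly projected onto each equation's solution set. A minimal sketch, in which the 2x2 "image" and the four-ray layout are invented for illustration:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Minimal Kaczmarz / ART solver for A x = b.

    Each row of A models one ray's line integral through the image;
    the estimate is cyclically projected onto each row's hyperplane.
    """
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            norm_sq = a_i @ a_i
            if norm_sq > 0:
                x += (b_i - a_i @ x) / norm_sq * a_i
    return x

# Toy system: a 2x2 "image" probed by four rays (two rows, two columns).
A = np.array([[1., 1., 0., 0.],   # ray through the top row
              [0., 0., 1., 1.],   # ray through the bottom row
              [1., 0., 1., 0.],   # ray through the left column
              [0., 1., 0., 1.]])  # ray through the right column
true_image = np.array([1., 2., 3., 4.])
b = A @ true_image                # simulated measurements
x = kaczmarz(A, b)                # converges towards true_image
```

On consistent systems, Kaczmarz started from zero converges to the minimum-norm solution; the toy image here is chosen so that this coincides with the true image.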

It has been claimed that thanks to the success of The Beatles, EMI could fund research and build early models for 
medical use. The first production X-ray CT machine (in fact called the "EMI-Scanner") was limited to making 
tomographic sections of the brain, but acquired the image data in about 4 minutes (scanning two adjacent slices), and 
the computation time (using a Data General Nova minicomputer) was about 7 minutes per picture. This scanner 
required the use of a water-filled Perspex tank with a pre-shaped rubber "head-cap" at the front, which enclosed the 
patient's head. The water-tank was used to reduce the dynamic range of the radiation reaching the detectors (between 
scanning outside the head compared with scanning through the bone of the skull). The images were relatively low 
resolution, being composed of a matrix of only 80 x 80 pixels. 

In the U.S., the first installation was at the Mayo Clinic. As a tribute to the impact of this system on medical imaging 
the Mayo Clinic has an EMI scanner on display in the Radiology Department. Allan McLeod Cormack of Tufts 
University in Massachusetts independently invented a similar process, and both Hounsfield and Cormack shared the 
1979 Nobel Prize in Medicine. [10] 

The first CT system that could make images of any part of the body and did not require the "water tank" was the 
ACTA (Automatic Computerized Transverse Axial) scanner designed by Robert S. Ledley, DDS, at Georgetown 
University. This machine had 30 photomultiplier tubes as detectors and completed a scan in only 9 translate/rotate 
cycles, much faster than the EMI-scanner. It used a DEC PDP11/34 minicomputer both to operate the 
servo-mechanisms and to acquire and process the images. The Pfizer drug company acquired the prototype from the 
university, along with rights to manufacture it. Pfizer then began making copies of the prototype, calling it the 
"200FS" (FS meaning Fast Scan), which were selling as fast as they could make them. This unit produced images in 
a 256x256 matrix, with much better definition than the EMI-Scanner's 80x80. 

Previous studies 

A form of tomography can be performed by moving the X-ray source and detector during an exposure. Anatomy at 
the target level remains sharp, while structures at different levels are blurred. By varying the extent and path of 
motion, a variety of effects can be obtained, with variable depth of field and different degrees of blurring of "out of 
plane" structures. 

Although largely obsolete, conventional tomography is still used in specific situations such as dental imaging 
(orthopantomography) or in intravenous urography. 


Digital tomosynthesis combines digital image capture and processing with simple tube/detector motion as used in 
conventional radiographic tomography. Although there are some similarities to CT, it is a separate technique. In CT, 
the source/detector makes a complete 360-degree rotation about the subject obtaining a complete set of data from 
which images may be reconstructed. In digital tomosynthesis, only a small rotation angle (e.g., 40 degrees) with a 
small number of discrete exposures (e.g., 10) are used. This incomplete set of data can be digitally processed to yield 
images similar to conventional tomography with a limited depth of field. However, because the image processing is 
digital, a series of slices at different depths and with different thicknesses can be reconstructed from the same 
acquisition, saving both time and radiation exposure. 

Because the data acquired is incomplete, tomosynthesis is unable to offer the extremely narrow slice widths that CT 
offers. However, higher resolution detectors can be used, allowing very-high in-plane resolution, even if the Z-axis 
resolution is poor. The primary interest in tomosynthesis is in breast imaging, as an extension to mammography, 
where it may offer better detection rates with little extra increase in radiation exposure. 

Reconstruction algorithms for tomosynthesis are significantly different from conventional CT, because the 
conventional filtered back projection algorithm requires a complete set of data. Iterative algorithms based upon 
expectation maximization are most commonly used, but are extremely computationally intensive. Some 
manufacturers have produced practical systems using off-the-shelf GPUs to perform the reconstruction. 
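The refocusing principle behind tomosynthesis (and its shift-and-add ancestor in conventional tomography) can be sketched in one dimension: each view shifts features laterally in proportion to their depth, so shifting the projections back by the amount appropriate to one chosen plane and averaging brings that plane into focus while smearing the others. The geometry below, with integer pixel shifts per view, is a deliberately simplified assumption:

```python
import numpy as np

def shift_and_add(projections, shifts_px):
    """Refocus a stack of 1-D projections on one plane by undoing
    that plane's per-view shift, then averaging (shift-and-add)."""
    stack = [np.roll(p, -s) for p, s in zip(projections, shifts_px)]
    return np.mean(stack, axis=0)

n = 64
views = range(-3, 4)        # 7 views over a small angular range
deep, shallow = 2, 1        # per-view shift (px) grows with plane depth

# Simulate one point feature in each of two planes.
projections = []
for v in views:
    p = np.zeros(n)
    p[20 + deep * v] += 1.0      # feature in the "deep" plane
    p[40 + shallow * v] += 1.0   # feature in the "shallow" plane
    projections.append(p)

# Refocus on the deep plane: its feature re-aligns at index 20,
# while the shallow feature is smeared across neighbouring pixels.
focused_deep = shift_and_add(projections, [deep * v for v in views])
```

Because only the per-view shift changes, a whole series of planes can be refocused from the same exposures, which is exactly the dose-saving property described above.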

Diagnostic use 

Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement X-rays and 
medical ultrasonography. It has more recently been used for preventive medicine or screening for disease, for 
example CT colonography for patients with a high risk of colon cancer, or full-motion heart scans for patients with 
high risk of heart disease. A number of institutions offer full-body scans for the general population. However, this is 
a controversial practice, given its lack of proven benefit, cost, radiation exposure, and the risk of finding 'incidental' 
abnormalities that may trigger additional investigations. 




CT scanning of the head is typically used to detect infarction, tumours, calcifications, haemorrhage and bone trauma. 
Of the above, hypodense (dark) structures indicate infarction or tumours, hyperdense (bright) structures indicate 
calcifications and haemorrhage and bone trauma can be seen as disjunction in bone windows. 


CT can be used for detecting both acute and chronic changes in the lung parenchyma, that is, the internals of the 
lungs. It is particularly relevant here because normal two dimensional x-rays do not show such defects. A variety of 
different techniques are used depending on the suspected abnormality. For evaluation of chronic interstitial processes 
(emphysema, fibrosis, and so forth), thin sections with high spatial frequency reconstructions are used — often scans 
are performed both in inspiration and expiration. This special technique is called High Resolution CT (HRCT). 
HRCT is normally done with thin section with skipped areas between the thin sections. Therefore it produces a 
sampling of the lung and not continuous images. Continuous images are provided in a standard CT of the chest. 

For detection of airspace disease (such as pneumonia) or cancer, relatively thick sections and general purpose image 
reconstruction techniques may be adequate. IV contrast may also be used as it clarifies the anatomy and boundaries 
of the great vessels and improves assessment of the mediastinum and hilar regions for lymphadenopathy; this is 
particularly important for accurate assessment of cancer. 

CT angiography of the chest is also becoming the primary method for detecting pulmonary embolism (PE) and aortic 
dissection, and requires accurately timed rapid injections of contrast (Bolus Tracking) and high-speed helical 
scanners. CT is the standard method of evaluating abnormalities seen on chest X-ray and of following findings of 
uncertain acute significance. Cardiac CTA is now being used to diagnose coronary artery disease. 

According to a 2007 study in the New England Journal of Medicine, 19.2 million (31%) of the 62 million CTs done 
every year are for lung CTs. 

Pulmonary angiogram 

CT pulmonary angiogram (CTPA) is a medical diagnostic test used to 
diagnose pulmonary embolism (PE). It employs computed tomography 
to obtain an image of the pulmonary arteries. 

It is a preferred choice of imaging in the diagnosis of PE due to its 
minimally invasive nature for the patient, whose only requirement for 
the scan is a cannula (usually a 20G). 

MDCT (multi detector CT) scanners give the optimum resolution and 
image quality for this test. Images are usually taken on a 0.625 mm 
slice thickness, although 2 mm is sufficient. 50–100 ml of contrast is 
given to the patient at a rate of 4 ml/s. The tracker/locator is placed at 
the level of the pulmonary arteries, which sit roughly at the level of the 
carina. Images are acquired with the maximum intensity of 
radio-opaque contrast in the pulmonary arteries. This is done using 
bolus tracking. 
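Bolus tracking can be sketched as a simple threshold trigger: low-dose monitoring scans repeatedly measure the mean attenuation in a region of interest until contrast arrival pushes it past a preset enhancement level, at which point the diagnostic acquisition starts. The numbers below (baseline HU, threshold, ROI readings) are illustrative, not a clinical protocol:

```python
def bolus_trigger(roi_hu_readings, baseline_hu=40.0, threshold_rise=100.0):
    """Index of the first monitoring scan whose ROI enhancement over the
    baseline reaches the trigger threshold, or None if it never does."""
    for i, hu in enumerate(roi_hu_readings):
        if hu - baseline_hu >= threshold_rise:
            return i
    return None

# Simulated ROI attenuation (HU) sampled once per second after injection:
monitor = [42, 45, 55, 90, 160, 280]
trigger_at = bolus_trigger(monitor)  # diagnostic scan starts at this sample
```

In practice the trigger is followed by a short fixed delay before the full scan, so that the bolus peak coincides with the acquisition.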

CT machines are now so sophisticated that the test can be done with a patient visit of 5 minutes with an approximate 
scan time of only 5 seconds or less. 

A normal CTPA scan will show the contrast filling the pulmonary vessels, looking bright white. Ideally the aorta 
should be empty of contrast, to reduce any partial volume artifact which may result in a false positive. Any mass 
filling defects, such as an embolus, will appear dark in place of the contrast, filling or blocking the space where 
blood should be flowing into the lungs. 

Example of a CTPA, demonstrating a saddle embolus (dark horizontal line) occluding the pulmonary arteries (bright 
white triangle) 


With the advent of subsecond rotation combined with multi-slice CT (up to 320 slices), high resolution and high 
speed can be obtained at the same time, allowing excellent imaging of the coronary arteries (cardiac CT 
angiography). Images with an even higher temporal resolution can be formed using retrospective ECG gating. In this 
technique, each portion of the heart is imaged more than once while an ECG trace is recorded. The ECG is then used 
to correlate the CT data with their corresponding phases of cardiac contraction. Once this correlation is complete, all 
data that were recorded while the heart was in motion (systole) can be ignored and images can be made from the 
remaining data that happened to be acquired while the heart was at rest (diastole). In this way, individual frames in a 
cardiac CT investigation have a better temporal resolution than the shortest tube rotation time. 
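The retrospective gating described above can be sketched as a filter over projection timestamps: using the R-peaks of the recorded ECG, keep only the data whose cardiac phase falls in a quiet diastolic window. The 70–80% window and the simulated timestamps below are illustrative:

```python
def in_diastolic_window(t_ms, r_peaks_ms, lo=0.70, hi=0.80):
    """True if timestamp t_ms falls in the lo..hi fraction of its R-R
    interval. r_peaks_ms: sorted R-peak times from the recorded ECG.
    The 70-80% window (late diastole) is a typical, illustrative choice."""
    for start, end in zip(r_peaks_ms, r_peaks_ms[1:]):
        if start <= t_ms < end:
            phase = (t_ms - start) / (end - start)
            return lo <= phase <= hi
    return False

r_peaks_ms = [0, 1000, 2000]              # simulated 60 bpm ECG
times_ms = [50 * k for k in range(40)]    # projection timestamps, 50 ms apart
kept = [t for t in times_ms if in_diastolic_window(t, r_peaks_ms)]
# kept -> only projections acquired while the heart was near rest
```

Prospective gating applies the same window test in advance, switching the tube on only inside the window, which is how the dose savings quoted below are achieved.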

Because the heart is effectively imaged more than once (as described above), cardiac CT angiography results in a 
relatively high radiation exposure, around 12 mSv. Newer acquisition protocols have been developed that drastically 
reduce the X-ray exposure, down to 1 millisievert (cf. Pavone, Fioranelli, Dowe: Computed Tomography of Coronary 
Arteries, Springer 2009). For the sake of comparison, a chest X-ray carries a dose of approximately 0.02 to 0.2 mSv 
and natural background radiation exposure is around 0.01 mSv/day. Thus, cardiac CTA is equivalent to approximately 
100–600 chest X-rays or over 3 years' worth of natural background radiation. 

Methods are available to decrease this exposure, however, such as prospectively decreasing radiation output based 
on the concurrently acquired ECG (tube current modulation). This can result in a significant decrease in radiation 
exposure, at the risk of compromising image quality if there is any arrhythmia during the acquisition. The 
significance of radiation doses in the diagnostic imaging range has not been proven, although the possibility of 
inducing an increased cancer risk across a population is a source of significant concern. This potential risk must be 
weighed against the competing risk of not performing a test and potentially not diagnosing a significant health 
problem such as coronary artery disease. 

It is uncertain whether this modality will replace invasive coronary catheterization. Currently, it appears that the 
greatest utility of cardiac CT lies in ruling out coronary artery disease rather than ruling it in. This is because the test 
has a high sensitivity (greater than 90%) and thus a negative test result means that a patient is very unlikely to have 
coronary artery disease and can be worked up for other causes of their chest symptoms. This is termed a high 
negative predictive value. A positive result is less conclusive and often will be confirmed (and possibly treated) with 
subsequent invasive angiography. The positive predictive value of cardiac CTA is estimated at approximately 82% 
and the negative predictive value is around 93%. 

Dual Source CT scanners, introduced in 2005, allow higher temporal resolution by acquiring a full CT slice in only 
half a rotation, thus reducing motion blurring at high heart rates and potentially allowing for shorter breath-hold 
time. This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take 
heart-rate lowering medication. 

The speed advantages of 64-slice MSCT have rapidly established it as the minimum standard for newly installed CT 
scanners intended for cardiac scanning. Manufacturers have developed 320-slice and true 'volumetric' scanners, 
primarily for their improved cardiac scanning performance. 

The latest MSCT scanners acquire images only at 70–80% of the R-R interval (late diastole). This prospective gating 
can reduce effective dose from 10–15 mSv to as little as 1.2 mSv in follow-up patients acquiring at 75% of the R-R 
interval. Effective doses at a centre with well trained staff doing coronary imaging can average less than the doses 
for conventional coronary angiography. 

CT scan of an 11 cm Wilms' tumor of the right kidney 
in a 13-month-old patient. 


Abdominal and pelvic 

CT is a sensitive method for diagnosis of abdominal diseases. It is used 
frequently to determine stage of cancer and to follow progress. It is 
also a useful test to investigate acute abdominal pain (especially of the 
lower quadrants, whereas ultrasound is the preferred first line 
investigation for right upper quadrant pain). Renal stones, appendicitis, 
pancreatitis, diverticulitis, abdominal aortic aneurysm, and bowel 
obstruction are conditions that are readily diagnosed and assessed with 
CT. CT is also the first line for detecting solid organ injury after trauma. 

Multidetector CT (MDCT) can clearly delineate anatomic structures in 
the abdomen, which is critical in the diagnosis of internal 
diaphragmatic and other nonpalpable or unsuspected hernias. MDCT 
also offers clear detail of the abdominal wall allowing wall hernias to 
be identified accurately. 

Oral and/or rectal contrast may be used depending on the indications for the scan. A dilute (2% w/v) suspension of 
barium sulfate is most commonly used. The concentrated barium sulfate preparations used for fluoroscopy e.g. 
barium enema are too dense and cause severe artifacts on CT. Iodinated contrast agents may be used if barium is 
contraindicated (for example, suspicion of bowel injury). Other agents may be required to optimize the imaging of 
specific organs, such as rectally administered gas (air or carbon dioxide) or fluid (water) for a colon study, or oral 
water for a stomach study. 

CT has limited application in the evaluation of the pelvis. For the female pelvis in particular, ultrasound and MRI are 
the imaging modalities of choice. Nevertheless, it may be part of abdominal scanning (e.g. for tumors), and has uses 
in assessing fractures. 

CT is also used in osteoporosis studies and research alongside dual energy X-ray absorptiometry (DXA). Both CT 
and DXA can be used to assess bone mineral density (BMD) which is used to indicate bone strength, however CT 
results do not correlate exactly with DXA (the gold standard of BMD measurement). CT is far more expensive, and 
subjects patients to much higher levels of ionizing radiation, so it is used infrequently. 


CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the 
area of interest in multiple planes. Fractures, ligamentous injuries and dislocations can easily be recognised with a 
0.2 mm resolution. 

Advantages and disadvantages 
Advantages over traditional radiography 

There are several advantages that CT has over traditional 2D medical radiography. First, CT completely eliminates 
the superimposition of images of structures outside the area of interest. Second, because of the inherent high-contrast 
resolution of CT, differences between tissues that differ in physical density by less than 1% can be distinguished. 
Finally, data from a single CT imaging procedure consisting of either multiple contiguous or one helical scan can be 
viewed as images in the axial, coronal, or sagittal planes, depending on the diagnostic task. This is referred to as 
multiplanar reformatted imaging. 

CT is regarded as a moderate to high radiation diagnostic technique. While technical advances have improved 
radiation efficiency, there has been simultaneous pressure to obtain higher-resolution imaging and use more complex 
scan techniques, both of which require higher doses of radiation. The improved resolution of CT has permitted the 
development of new investigations, which may have advantages; compared to conventional angiography for 
example, CT angiography avoids the invasive insertion of an arterial catheter and guidewire; CT colonography (also 
known as virtual colonoscopy or VC for short) may be as useful as a barium enema for detection of tumors, but may 
use a lower radiation dose. CT VC is increasingly being used in the UK as a diagnostic test for bowel cancer and can 
negate the need for a colonoscopy. 

The greatly increased availability of CT, together with its value for an increasing number of conditions, has been 
responsible for a large rise in popularity. So large has been this rise that, in the most recent comprehensive survey in 
the United Kingdom, CT scans constituted 7% of all radiologic examinations, but contributed 47% of the total 
collective dose from medical X-ray examinations in 2000/2001. Increased CT usage has led to an overall rise in 
the total amount of medical radiation used, despite reductions in other areas. In the United States and Japan, for 
example, there were 26 and 64 CT scanners per 1 million population, respectively, in 1996. In the U.S., there were 
about 3 million CT scans performed in 1980, compared to an estimated 62 million scans in 2006. 

The radiation dose for a particular study depends on multiple factors: volume scanned, patient build, number and 
type of scan sequences, and desired resolution and image quality. Additionally, two helical CT scanning parameters 
that can be adjusted easily and that have a profound effect on radiation dose are tube current and pitch. 
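Pitch, one of the two dose-determining parameters just named, is conventionally defined for helical multislice CT as the table travel per gantry rotation divided by the total collimated beam width; at fixed tube current the dose scales roughly as 1/pitch. A small sketch, in which the 40 mm collimation and 55 mm feed are example values only:

```python
def helical_pitch(table_feed_mm, beam_collimation_mm):
    """Pitch for helical multislice CT (IEC definition): table travel
    per gantry rotation divided by the total collimated beam width."""
    return table_feed_mm / beam_collimation_mm

# Example values: 64 detector rows x 0.625 mm = 40 mm total collimation.
pitch = helical_pitch(table_feed_mm=55.0, beam_collimation_mm=40.0)
relative_dose = 1.0 / pitch  # at fixed tube current, dose scales ~ 1/pitch
```

A pitch above 1 therefore trades some sampling density for a proportionally lower dose, which is why it is such an effective adjustment.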

Computed tomography (CT) scanning has been shown to be more accurate than radiographs in evaluating anterior 
interbody fusion, but may still over-read the extent of fusion. 

Safety concerns 

The increased use of CT scans has been greatest in two fields: screening of adults (screening CT of the lung in 
smokers, virtual colonoscopy, CT cardiac screening and whole-body CT in asymptomatic patients) and CT imaging 
of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to 
remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially 
for the diagnosis of appendicitis). CT scans of children have been estimated to produce non-negligible increases in 
the probability of lifetime cancer mortality, leading to calls for the use of reduced current settings for CT scans of 
children. These calculations are based on the assumption of a linear relationship between radiation dose and cancer 
risk; this claim is controversial, as some but not all evidence shows that smaller radiation doses are not harmful. 
Estimated lifetime cancer mortality risks attributable to the radiation exposure from a CT in a 1-year-old are 0.18% 
(abdominal) and 0.07% (head), an order of magnitude higher than for adults, although those figures still represent a 
small increase in cancer mortality over the background rate. In the United States, of approximately 600,000 
abdominal and head CT examinations annually performed in children under the age of 15 years, a rough estimate is 
that 500 of these individuals might ultimately die from cancer attributable to the CT radiation. The additional risk 
is still very low (0.35%) compared to the background risk of dying from cancer (23%). However, if these statistics 
are extrapolated to the current number of CT scans, the additional rise in cancer mortality could be 1.5 to 2%. 
Furthermore, certain conditions can require children to be exposed to multiple CT scans. Again, these calculations 
can be problematic because the assumptions underlying them could overestimate the risk. 

In 2009, a number of studies appeared that further defined the risk of cancer that may be caused by CT scans. One 
study indicated that radiation from CT scans is often higher and more variable than cited, and that each of the 
19,500 CT scans performed daily in the US is equivalent to 30 to 442 chest X-rays in radiation. It has been 
estimated that CT radiation exposure will result in 29,000 new cancer cases just from the CT scans performed in 
2007. The most common cancers caused by CT are thought to be lung cancer, colon cancer and leukemia, with 
younger people and women more at risk. These conclusions, however, are criticized by the American College of 
Radiology (ACR), which maintains that the life expectancy of CT-scanned patients is not that of the general 
population, and that the model of calculating cancer is based on total-body radiation exposure and is thus faulty. 



CT scans can be performed with different settings for lower exposure in children, although these techniques are often 
not employed. Surveys have suggested that currently, many CT scans are performed unnecessarily. Ultrasound 
scanning or magnetic resonance imaging are alternatives (for example, in appendicitis or brain imaging) without the 
risk of radiation exposure. Although CT scans come with an additional risk of cancer (it has been estimated that the 
radiation exposure from a full-body scan is the same as standing 2.4 km away from the WWII atomic bomb blasts in 
Japan), especially in children, the benefits that stem from their use outweigh the risks in many cases. Studies 
support informing parents of the risks of pediatric CT scanning. 


Typical scan doses 


Examination                               Typical effective dose (mSv) 
Chest X-ray 
Head CT 
Screening mammography                     3 [17] 
Abdomen CT 
Chest CT 
CT colonography (virtual colonoscopy) 
Chest, abdomen and pelvis CT 
Cardiac CT angiogram 
Barium enema                              15 [17] 
Neonatal abdominal CT                     20 [17] 


For purposes of comparison, the average background exposure in the UK is 1-3 mSv per year. 

Adverse reactions to contrast agents 

Because contrast CT scans rely on intravenously administered contrast agents in order to provide superior image 
quality, there is a low but non-negligible level of risk associated with the contrast agents themselves. Many patients 
report nausea and discomfort, including warmth in the crotch which mimics the sensation of wetting oneself. Certain 
patients may experience severe and potentially life-threatening allergic reactions to the contrast dye. 

The contrast agent may also induce kidney damage. The risk of this is increased with patients who have preexisting 
renal insufficiency, preexisting diabetes, or reduced intravascular volume. In general, if a patient has normal kidney 
function, then the risks of contrast nephropathy are negligible. Patients with mild kidney impairment are usually 
advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use 
of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT, e.g., MRI. 
Paradoxically, patients with severe renal failure requiring dialysis do not require special precautions, as their kidneys 
have so little function remaining that any further damage would not be noticeable and the dialysis will remove the 
contrast agent. 


Low-dose CT scan 

An important issue within radiology today is how to reduce the radiation dose during CT examinations without 
compromising the image quality. Generally, higher radiation doses result in higher-resolution images, while lower 
doses lead to increased image noise and unsharp images. Increased dosage raises the risk of radiation induced cancer 
— a four-phase abdominal CT gives the same radiation dose as 300 chest x-rays. Several methods exist which can 
reduce the exposure to ionizing radiation during a CT scan. 

1. New software technology can significantly reduce the required radiation dose. The software works as a filter that 
reduces random noise and enhances structures. In this way, it is possible to get high-quality images and at the 
same time lower the dose by as much as 30 to 70 percent. 

2. Individualize the examination and adjust the radiation dose to the body type and body organ examined. Different 
body types and organs require different amounts of radiation. 

3. Prior to every CT examination, evaluate the appropriateness of the exam: whether it is motivated, or if another 
type of examination is more suitable. Higher resolution is not always suitable for any given scenario, such as 
detection of small pulmonary masses. 

Computed tomography versus MRI 

The basic mathematics of the 2D-Fourier transform in CT reconstruction is very similar to the 2D-FT NMRI, but the 
computer data processing in CT does differ in detail, as for example in the case of the volume rendering and artifact 
elimination algorithms that are specific to CT. 


Usage of CT has increased dramatically over the last two decades . An estimated 72 million scans were 

performed in the United States in 2007. In Calgary Canada 12.1% of people who present to the emergency with 

an urgent complaint received a CT scan, most commonly either of the head or the abdomen. The percentage who 

received CT however varied markedly by the emergency physician who saw them from 1.8% to 25%. 


X-ray slice data is generated using an X-ray source that rotates around the object; X-ray sensors are positioned on the 
opposite side of the circle from the X-ray source. The earliest sensors were scintillation detectors, with 
photomultiplier tubes excited by (typically) cesium iodide crystals. Cesium iodide was replaced during the 1980s by 
ion chambers containing high pressure Xenon gas. These systems were in turn replaced by scintillation systems 
based on photo diodes instead of photomultipliers and modern scintillation materials with more desirable 
characteristics. Many data scans are progressively taken as the object is gradually passed through the gantry. 

Newer machines with faster computer systems and newer software strategies can process not only individual cross 
sections but continuously changing cross sections as the gantry, with the object to be imaged, is slowly and smoothly 
slid through the X-ray circle. These are called helical or spiral CT machines. Their computer systems integrate the 
data of the moving individual slices to generate three dimensional volumetric information (3D-CT scan), in turn 
viewable from multiple different perspectives on attached CT workstation monitors. This type of data acquisition 
requires enormous processing power, as the data are arriving in a continuous stream and must be processed in real 
time. 

In conventional CT machines, an X-ray tube and detector are physically rotated behind a circular shroud (see the 
image above right); in the electron beam tomography (EBT) the tube is far larger and higher power to support the 
high temporal resolution. The electron beam is deflected in a hollow funnel-shaped vacuum chamber. X-rays are 
generated when the beam hits the stationary target. The detector is also stationary. This arrangement can result in 
very fast scans, but is extremely expensive. 



CT scanner with cover removed to show the 
principle of operation 

CT is used in medicine as a diagnostic tool and as a guide for 
interventional procedures. Sometimes contrast materials such as 
intravenous iodinated contrast are used. This is useful to highlight 
structures such as blood vessels that otherwise would be difficult to 
delineate from their surroundings. Using contrast material can also 
help to obtain functional information about tissues. 

Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, 
which produces a series of cross-sectional images. The most common technique in general use is filtered back 
projection, which is straightforward to implement and can be computed rapidly. Mathematically, this method is 
based on the Radon transform. However, this is not the only technique available: the original EMI scanner solved 
the tomographic reconstruction problem by linear algebra, but this approach was limited by its high computational 
complexity, especially given the computer technology available at the time. More recently, manufacturers have 
developed iterative physical model-based expectation-maximization techniques. These techniques are advantageous 
because they use an internal model of the scanner's physical properties and of the physical laws of X-ray 
interactions, whereas earlier methods assumed a perfect scanner and highly simplified physics, which leads to a 
number of artefacts and reduced resolution. The result is images with improved resolution, reduced noise and fewer 
artefacts, as well as the ability to greatly reduce the radiation dose in certain circumstances. The disadvantage is a 
very high computational requirement, which is at the limits of practicality for current scan protocols. 
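Filtered back projection can be sketched in a few lines: ramp-filter each projection in the frequency domain, then smear each filtered projection back across the image along its acquisition angle. This toy implementation (parallel-beam geometry, nearest-neighbour interpolation, and an analytic disk phantom as input) is a simplification of what clinical reconstructors do:

```python
import numpy as np

def fbp(sinogram, thetas):
    """Minimal parallel-beam filtered back projection.
    sinogram: (n_angles, n_detectors) array of line integrals."""
    n_angles, n = sinogram.shape
    # Ram-Lak (ramp) filter, applied per projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Back-project: accumulate each filtered projection along its angle.
    xs = np.arange(n) - n / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, thetas):
        s = X * np.cos(theta) + Y * np.sin(theta) + n / 2
        recon += proj[np.clip(s.astype(int), 0, n - 1)]  # nearest neighbour
    return recon * np.pi / n_angles

# Phantom: a centred disk of radius 16 has the same analytic projection
# p(s) = 2*sqrt(r^2 - s^2) at every angle.
n, r = 64, 16.0
s = np.arange(n) - n / 2
proj = 2.0 * np.sqrt(np.clip(r**2 - s**2, 0.0, None))
thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
recon = fbp(np.tile(proj, (180, 1)), thetas)
```

The reconstructed values approach 1 inside the disk and 0 outside; real scanners add detector weighting, finer interpolation and apodized filters on top of this skeleton.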

Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is 
displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3071 (most 
attenuating) to -1024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the 
matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, 
which is a three-dimensional unit. The phenomenon that one part of the detector cannot differentiate between 
different tissues is called the "partial volume effect": a large amount of cartilage and a thin layer of compact bone 
can cause the same attenuation in a voxel as hyperdense cartilage alone. Water has an attenuation of 0 Hounsfield 
units (HU), while air is -1000 HU; cancellous bone is typically +400 HU, and cranial bone can reach 2000 HU or 
more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of 
the element used: titanium usually has an attenuation of about +1000 HU, while iron or steel can completely 
extinguish the X-ray beam and is therefore responsible for well-known line artifacts in computed tomograms. 
Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that 
exceed the dynamic range of the processing electronics. 
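The Hounsfield scale just described is a linear rescaling of the measured linear attenuation coefficient μ such that water maps to 0 HU and air to -1000 HU. A minimal sketch, in which the μ value for water is a rough illustrative figure:

```python
def hounsfield(mu, mu_water):
    """Rescale a linear attenuation coefficient to Hounsfield units (HU):
    water maps to 0 HU by definition, air (mu ~ 0) to -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water

MU_WATER = 0.19  # approximate attenuation of water (1/cm) at CT energies

water_hu = hounsfield(0.19, MU_WATER)   # water -> 0 HU
air_hu = hounsfield(0.0, MU_WATER)      # air   -> -1000 HU
```

Because the scale is anchored to water, HU values are comparable across scanners even though the underlying μ values depend on the tube spectrum.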



Example of beam hardening 


Although CT is a relatively accurate test, it is liable to produce artifacts, such as the following: 

• Aliasing artifact or streaks 

These appear as dark lines which radiate away from sharp corners. They 
occur because it is impossible for the scanner to "sample" or take 
enough projections of the object, which is usually metallic. They can also 
occur when an insufficient X-ray tube current is selected, and 
insufficient penetration of the X-ray occurs. These artifacts are also 
closely tied to motion during a scan. This type of artifact commonly 
occurs in head images around the pituitary fossa area. 

• Partial volume effect 

This appears as "blurring" over sharp edges. It is due to the scanner being unable to differentiate between a small 
amount of high-density material (e.g. bone) and a larger amount of lower density (e.g., cartilage). The processor tries 
to average out the two densities or structures, and information is lost. This can be partially overcome by scanning 
using thinner slices. 

• Ring artifact 

Probably the most common mechanical artifact: the image of one or many "rings" appears within an image, usually
due to a detector fault.

• Noise artifact 

This appears as graining on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when
a thin slice thickness is used, and can also occur when the power supplied to the X-ray tube is insufficient to penetrate
the anatomy.

• Motion artifact 

This is seen as blurring and/or streaking which is caused by movement of the object being imaged. 

• Windmill 

Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with 
filters or a reduction in pitch. 

• Beam hardening 

This can give a "cupped appearance". It occurs when there is more attenuation in the center of the object than around 
the edge. This is easily corrected by filtration and software. 



Three-dimensional (3D) image reconstruction 

The principle 

Because contemporary CT scanners offer isotropic or near-isotropic resolution, the display of images does not need to
be restricted to the conventional axial images. Instead, it is possible for a software program to build a volume by
"stacking" the individual slices one on top of the other; the program may then display the volume in an alternative manner.

Multiplanar reconstruction 

Multiplanar reconstruction (MPR) is the simplest method of 
reconstruction. A volume is built by stacking the axial slices. The 
software then cuts slices through the volume in a different plane 
(usually orthogonal). Optionally, a special projection method, such as
maximum-intensity projection (MIP) or minimum-intensity projection
(mIP), can be used to build the reconstructed slices.

MPR is frequently used for examining the spine. Axial images through 
the spine will only show one vertebral body at a time and cannot 
reliably show the intervertebral discs. By reformatting the volume, it 
becomes much easier to visualise the position of one vertebral body in 
relation to the others. 

Modern software allows reconstruction in non-orthogonal (oblique) 
planes so that the optimal plane can be chosen to display an anatomical 
structure. This may be particularly useful for visualising the structure 
of the bronchi as these do not lie orthogonal to the direction of the scan. 

For vascular imaging, curved-plane reconstruction can be performed. This allows bends in a vessel to be 
"straightened" so that the entire length can be visualised on one image, or a short series of images. Once a vessel has 
been "straightened" in this way, quantitative measurements of length and cross sectional area can be made, so that 
surgery or interventional treatment can be planned. 

MIP reconstructions enhance areas of high radiodensity, and so are useful for angiographic studies. mIP
reconstructions tend to enhance air spaces, so are useful for assessing lung structure.
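The stacking, re-slicing and projection steps described above can be sketched with NumPy (the array shapes and values are illustrative):

```python
import numpy as np

# Build a volume by stacking axial slices, then re-slice it in other planes
# (multiplanar reconstruction) and compute intensity projections.
rng = np.random.default_rng(0)
axial_slices = [rng.integers(-1000, 2000, size=(64, 64)) for _ in range(40)]

volume = np.stack(axial_slices, axis=0)  # shape (z, y, x)

coronal = volume[:, 32, :]   # a cut through the volume at fixed y
sagittal = volume[:, :, 32]  # a cut at fixed x

mip = volume.max(axis=0)     # maximum-intensity projection: enhances dense structures
mnip = volume.min(axis=0)    # minimum-intensity projection: enhances air spaces

print(volume.shape, coronal.shape, mip.shape)
```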

Typical screen layout for diagnostic software, 
showing one 3D and three MPR views 

3D rendering techniques 

Surface rendering 

A threshold value of radiodensity is set by the operator (e.g. a level that corresponds to bone). From this, a 
three-dimensional model can be constructed using edge detection image processing algorithms and displayed 
on screen. Multiple models can be constructed from various different thresholds, allowing different colors to 
represent each anatomical component such as bone, muscle, and cartilage. However, the interior structure of 
each element is not visible in this mode of operation. 
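The thresholding step can be sketched as follows; the threshold value and the crude neighbour-based edge detection are illustrative stand-ins for the real surface-extraction algorithms (e.g. marching cubes):

```python
import numpy as np

def surface_mask(volume_hu, threshold=300):
    """Return the voxels on the surface of the thresholded region."""
    inside = volume_hu >= threshold  # operator-chosen radiodensity threshold (e.g. bone)
    eroded = inside.copy()
    for axis in range(inside.ndim):
        # A voxel survives erosion only if both its neighbours along each
        # axis are also inside the thresholded region.
        eroded &= np.roll(inside, 1, axis=axis) & np.roll(inside, -1, axis=axis)
    return inside & ~eroded          # inside, but with at least one outside neighbour

vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 1000.0  # a dense 3x3x3 "bone" cube in soft tissue
print(surface_mask(vol).sum())  # 26: all cube voxels except the hidden centre
```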

Volume rendering 

Surface rendering is limited in that it will only display surfaces which meet a threshold density, and will only 
display the surface that is closest to the imaginary viewer. In volume rendering, transparency and colors are 
used to allow a better representation of the volume to be shown in a single image — e.g. the bones of the pelvis 
could be displayed as semi-transparent, so that even at an oblique angle, one part of the image does not 
conceal another. 
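A toy version of this compositing, assuming a simple density-to-opacity mapping and a single fixed viewing axis:

```python
import numpy as np

def composite(volume, opacity_scale=0.05):
    """Front-to-back alpha compositing along the first (depth) axis."""
    color = np.zeros(volume.shape[1:])
    transmitted = np.ones(volume.shape[1:])  # fraction of light not yet absorbed
    for slab in volume:                       # march from the viewer into the volume
        alpha = np.clip(slab * opacity_scale, 0.0, 1.0)
        color += transmitted * alpha * slab
        transmitted *= 1.0 - alpha
    return color

vol = np.zeros((4, 2, 2))
vol[1] = 1.0   # a semi-transparent structure in front...
vol[3] = 10.0  # ...does not conceal the dense structure behind it
img = composite(vol)
print(img[0, 0])  # the back layer still contributes (about 4.8 here)
```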



Image segmentation 

Where different structures have similar radiodensity, it can become impossible to separate them simply by adjusting 
volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove 
the unwanted structures from the image. 
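A minimal threshold-based sketch of such a segmentation, here removing bone so that lower-density, contrast-filled vessels remain (the threshold and HU values are illustrative):

```python
import numpy as np

def remove_bone(volume_hu, bone_threshold=300):
    """Replace voxels at or above the bone threshold with air (-1024 HU)."""
    segmented = volume_hu.copy()
    segmented[volume_hu >= bone_threshold] = -1024
    return segmented

vol = np.array([[-50, 120, 900],   # soft tissue, contrast-filled vessel, bone
                [40, 1500, 200]])
print(remove_bone(vol))
```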


Some slices of a cranial CT scan are shown below. The bones are whiter than the surrounding area. (Whiter means 
higher attenuation.) Note the blood vessels (arrowed) showing brightly due to the injection of an iodine-based 
contrast agent. 

A volume rendering of this volume clearly shows the high-density bones.













Computed tomography of human brain, from base of the skull to top. Taken with intravenous contrast medium.

After using a segmentation tool to remove the bone, the previously 
concealed vessels can now be demonstrated. 

Bone reconstructed in 3D 



Industrial Computed Tomography 

Industrial CT scanning (industrial computed tomography) is a
process which uses X-ray equipment to produce 3D representations
of components, both externally and internally. Industrial CT scanning
has been used in many areas of industry for internal inspection of
components. Some of the key uses for CT scanning have been flaw
detection, failure analysis, metrology, assembly analysis and reverse
engineering applications.

See also

• Virtopsy

• Xenon-enhanced CT scanning

• X-ray microtomography

Brain vessels reconstructed in 3D after bone has
been removed by segmentation


References

[1] "computed tomography — Definition from the Merriam-Webster Online Dictionary" ( 
computed+tomography). Retrieved 2009-08-18.

[2] Herman, G. T., Fundamentals of computerized tomography: Image reconstruction from projection, 2nd edition, Springer, 2009 
[3] Smith-Bindman R, Lipson J, Marcus R, et al. (December 2009). "Radiation dose associated with common computed tomography 

examinations and the associated lifetime attributable risk of cancer". Arch. Intern. Med. 169 (22): 2078—86. 

doi:10.1001/archinternmed.2009.427. PMID 20008690. 
[4] Berrington de Gonzalez A, Mahesh M, Kim KP, et al. (December 2009). "Projected cancer risks from computed tomographic scans 

performed in the United States in 2007". Arch. Intern. Med. 169 (22): 2071-7. doi:10.1001/archinternmed.2009.440. PMID 20008689. 
[5] MeSH Tomography, +X-Ray+Computed (http://www.nlm.,+X-Ray+ 

[6] Allen M.Cormack: My Connection with the Radon Transform, in: 75 Years of Radon Transform, S. Gindikin and P. Michor, eds., 

International Press Incorporated (1994), pp. 32 - 35, ISBN 1-57146-008-X 
[7] Richmond, Caroline (September 18, 2004). "Obituary — Sir Godfrey Hounsfield" ( 

BMJ (London, UK: BMJ Group) 2004:329:687 (18 September 2004). . Retrieved September 12, 2008. 
[8] Beckmann, E. C. (January 2006). "CT scanning the early days" (http://bjr.birjournals.org/cgi/reprint/79/937/5.pdf). The British

Journal of Radiology 79: 5-8. doi:10.1259/bjr/29444122.
[9] "The Beatles greatest gift... is to science" (http://www.whittington.nhs. uk/default.asp?c=2804&t=l). Whittington Hospital NHS Trust. . 

Retrieved 2007-05-07. 
[10] Filler, AG (2009): The history, development, and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, DTI: 

Nature Precedings DOI: 10.1038/npre.2009.3267.5 (http://precedings.nature.eom/documents/3267/version/5). 

[11] Novelline, Robert. Squire's Fundamentals of Radiology. Harvard University Press. 5th edition. 1997. ISBN 0-674-83339-2.

[12] Hart, D; Wall B F (2002). "Radiation exposure of the UK population from Medical and Dental X-ray examinations" ( 

uk/radiation/publications/w_series_reports/2002/nrpb_w4.pdf) ( — b ( uk/scholar?hl=en&lr=& 


as_publication=NRPB+report+W-4&as_ylo=2002&as_yhi=2002&btnG=Search)). NRPB report W-4. . 
[13] Lee HK, Park SJ, Yi BH. Multidetector CT reveals diverse variety of abdominal hernias, ( 

article/113619/1575055) Diagnostic Imaging. 2010;32(5):27-31. 
[14] "Ankle Fractures" (http://orthoinfo.aaos. org/topic.cfm?topic=A00391). . American Association of Orthopedic 

Surgeons. . Retrieved 2010-05-30. 
[15] Buckwalter, Kenneth A. (11 September 2000). "Musculoskeletal Imaging with Multislice CT" ( 

content/full/ 176/4/979). American Journal of Roentgenology. . Retrieved 2010-05-22. 
[16] Hart, D.; Wall (2004). "UK population dose from medical X-ray examinations" ( 

S0720048X03001785). European Journal of Radiology 50 (3): 285-291. doi:10.1016/S0720-048X(03)00178-5. PMID 15145489. . 
[17] Brenner DJ, Hall EJ (November 2007). "Computed tomography — an increasing source of radiation exposure" ( 

cgi/pmidlookup?view=short&pmid=18046031&promo=ONFLNS19). N. Engl. J. Med. 357 (22): 2277-84. doi:10.1056/NEJMra072149. 


PMID 18046031.. 
[18] Donnelly, Lane F.; et al (1 February 2001). "Minimizing Radiation Dose for Pediatric Body Applications of Single-Detector Helical CT" 

(http://www.ajronline.Org/cgi/reprint/176/2/303). American Journal of Roentgenology 176 (2): 303—6. PMID 11159061. . 
[19] Brian R. Subach, M.D., F.A.C.S., et al. "Reliability and accuracy of fine-cut computed tomography scans to determine the status of anterior

interbody fusions with metallic cages" ( 


The Spine Journal 2008 Nov-Dec;8(6):998-1002. 
[20] Brenner, David J.; et al. (1 February 2001). "Estimated Risks of Radiation-Induced Fatal Cancer from Pediatric CT" (http://www.ajronline. 

org/cgi/content/abstract/176/2/289). American Journal of Roentgenology 176 (176): 289-296. PMID 11159059.
[21] Brenner D, Elliston C, Hall E, Berdon W (February 2001). "Estimated risks of radiation-induced fatal cancer from pediatric CT" (http:// 1 159059). AJR Am J Roentgenol 176 (2): 289-96. PMID 11159059.
[22] Roxanne Nelson (December 17, 2009). "Thousands of New Cancers Predicted Due to Increased Use of CT" ( 

viewarticle/7 14025). Medscape. . Retrieved January 2, 2010. 
[23] Semelka, RC; Armao, DM; Elias, J, Jr.; Huda, W. (May 2007). "Imaging strategies to reduce the risk of radiation in CT studies, including 

selective substitution with MRL". J Magn Reson Imaging 25 (5): 900—9. 
[24] Khamsi, Roxanne (2007). New Scientist ( 

dnll827-ct-scan-radiation-can-equal-nuclear-bomb-exposure-.html). 11 May 2007. . 
[25] Larson DB, Rader SB, Forman HP, Fenton LZ (August 2007). "Informing parents about CT radiation exposure in children: it's OK to tell 

them" ( AJR Am J Roentgenol 189 (2): 271—5. 

doi: 10.22 14/AJR.07.2248. PMID 17646450. . 
[26] Shrimpton, P.C; Miller, H.C; Lewis, M.A; Dunn, M. Doses from Computed Tomography (CT) examinations in the UK - 2003 Review 

[27] "Radiation Exposure during Cardiac CT: Effective Doses at Multi— Detector Row CT and Electron-Beam CT" (http://radiology.rsnajnls. 

org/cgi/content/abstract/226/1/145). 2002-11-21. . Retrieved 2009-10-13. 
[28] Simpson, Graham (2009). "Thoracic computed tomography: principles and practice" ( 

articles/1036. pdf) (PDF). Australian Prescriber, 32:4. Retrieved September 25, 2009. 
[29] Smith-Bindman R, Lipson J, Marcus R, et al. (December 2009). "Radiation dose associated with common computed tomography 

examinations and the associated lifetime attributable risk of cancer". Arch. Intern. Med. 169 (22): 2078—86. 

doi:10.1001/archinternmed.2009.427. PMID 20008690. 
[30] Berrington de Gonzalez A, Mahesh M, Kim KP, et al. (December 2009). "Projected cancer risks from computed tomographic scans 

performed in the United States in 2007". Arch. Intern. Med. 169 (22): 2071-7. doi:10.1001/archinternmed.2009.440. PMID 20008689. 
[31] Andrew Skelly (Aug 3 2010). "CT ordering all over the map". The Medical Post. 
[32] Udupa, J.K. and Herman, G. T., 3D Imaging in Medicine, 2nd Edition, CRC Press, 2000 

External links 

• Open-source computed tomography simulator with educational tracing displays ( 

• Free software for viewing CT and other medical imaging files ( 

• CT Artefacts ( by David Platten 

• DigiMorph ( A library of 3D imagery based on CT scans of the internal and external 
structure of living and extinct plants and animals. 

• MicroCT and calcified tissues ( A 
website dedicated to microCT in the microscopic analysis of calcified tissues. 

• Free Radiology Resource for Radiologists, Radiographers, and Technical Assistance ( 

• Radiation Risk Calculator ( Calculate cancer risk from CT scans and xrays. 

• CT scanner video - gantry ( 

• CT in your clinical practice (http://www.ajronline.Org/cgi/data/183/3/DCl/l) by Gregory J. Kohs and Joel 

• Coronary CT angiography by Eugene Lin ( 

• CT physics lecture ( 
html?task=viewvideo&video_id=122) excellent video lectures about physics in computed tomography 

• Video documentary of patient getting a CT Scan ( 




Angiography or arteriography is a medical imaging 
technique used to visualize the inside, or lumen, of 
blood vessels and organs of the body, with particular 
interest in the arteries, veins and the heart chambers. 
This is traditionally done by injecting a radio-opaque 
contrast agent into the blood vessel and imaging using 
X-ray based techniques such as fluoroscopy. The word 
itself comes from the Greek words angeion, "vessel", 
and graphein, "to write or record". The film or image 
of the blood vessels is called an angiograph, or more 
commonly, an angiogram. 

The term angiography is strictly defined as based on
projectional radiography; however, the term has been
applied to newer vascular imaging techniques such as
CT angiography and MR angiography. The term
isotope angiography has also been used, although this
is more correctly referred to as isotope perfusion scanning.

Angiogram showing a transverse projection of the vertebrobasilar 
and posterior cerebral circulation. 


The technique was first developed in 1927 by the Portuguese physician and neurologist Egas Moniz to provide 
contrasted x-ray cerebral angiography in order to diagnose several kinds of nervous diseases, such as tumors, 
coronary heart disease and arteriovenous malformations. He is usually recognized as one of the pioneers in this field. 
Moniz performed the first cerebral angiogram in Lisbon in 1927, and Reynaldo Cid dos Santos performed the first 
aortogram in the same city in 1929. With the introduction of the Seldinger technique in 1953, the procedure became
markedly safer, as no sharp introductory devices needed to remain inside the vascular lumen.




Depending on the type of angiogram, access to the
blood vessels is gained most commonly through the
femoral artery, to look at the left side of the heart and
the arterial system, or the jugular or femoral vein, to
look at the right side of the heart and the venous
system. Using a system of guide wires and catheters, a
type of contrast agent (which shows up by absorbing
the X-rays) is added to the blood to make it visible on
the X-ray images.

The X-ray images taken may either be still images, displayed on an image intensifier or film, or motion images. For
all structures except the heart, the images are usually taken using a technique called digital subtraction angiography
(DSA). Images in this case are usually taken at 2-3 frames per second, which allows the radiologist to evaluate the
flow of blood through a vessel or vessels. This technique "subtracts" the bones and other organs so that only the
vessels filled with contrast agent can be seen. Heart images are taken at 15-30 frames per second, without a
subtraction technique; because DSA requires the patient to remain motionless, it cannot be used on the heart. Both
techniques enable the radiologist or cardiologist to see stenoses (blockages or narrowings) inside the vessel which
may be inhibiting the flow of blood and causing pain.
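The "subtraction" in DSA is literal pixel arithmetic; a toy sketch with illustrative values:

```python
import numpy as np

# Digital subtraction angiography: subtract a pre-contrast "mask" frame from
# each contrast-filled frame, so static anatomy (bone, organs) cancels out.
mask_frame = np.array([[100.0, 200.0],
                       [150.0, 100.0]])  # bone and soft tissue only

contrast_frame = mask_frame.copy()
contrast_frame[0, 1] += 80.0             # contrast agent in one vessel pixel

dsa = contrast_frame - mask_frame        # only the contrast-filled vessel remains
print(dsa)
```

In practice the subtraction is done after logarithmic conversion of the detector signal, but the cancellation of stationary anatomy works as shown.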

Color (DSA) 

Color DSA provides a dynamic flow evaluation and a greater understanding of contrast flow within the pathology.
It assists the clinician in planning surgical procedures and clearly demonstrates post-procedural results.


3D angiography provides the radiologist with a three-dimensional view of vascular structures. The 3D view helps
determine the spatial layout of vascular structures and simplifies planning of surgical procedures.


Coronary Angiography 

One of the most common angiograms performed is to visualize the blood in the coronary arteries. A long, thin, flexible
tube called a catheter is used to administer the X-ray contrast agent at the desired area to be visualized. The catheter
is threaded into an artery in the forearm, and the tip is advanced through the arterial system into the major coronary
artery. X-ray images of the transient radiocontrast distribution within the blood flowing within the coronary arteries
allow visualization of the size of the artery openings. The presence or absence of atherosclerosis or atheroma within the
walls of the arteries cannot be clearly determined. See coronary catheterization for more detail.



Microangiography is commonly used to visualize tiny blood vessels.

Neurovascular angiography

Another increasingly common angiographic procedure is neuro-vascular digital subtraction angiography in order to 
visualise the arterial and venous supply to the brain. Intervention work such as coil-embolisation of aneurysms and 
AVM gluing can also be performed. 

Peripheral Angiography 

Angiography is also commonly performed to identify vessel narrowing in patients with leg claudication or cramps,
caused by reduced blood flow down the legs and to the feet, and in patients with renal artery stenosis (which commonly
causes high blood pressure); it can also be used in the head to find and treat the vascular causes of stroke. These are
all done routinely through the femoral artery, but can also be performed through the brachial or axillary (arm) artery.
Any stenoses found may be treated by atherectomy.


Other angiographic uses include the diagnosis of retinal vascular disorders, such as diabetic retinopathy and macular
degeneration.
Coronary Angiography 

Coronary angiograms are common, and major complications are rare. These include cardiac arrhythmias, kidney
damage, blood clots (which can cause heart attack or stroke), hypotension and pericardial effusion. Minor
complications can include bleeding or bruising at the site where the contrast is injected, blood vessel damage on the
route to the heart from the catheter (rare) and allergic reaction to the contrast agent.

Cerebral Angiography 

Major complications in cerebral angiography are also rare but include stroke; an allergic reaction to the anaesthetic,
other medication or the contrast medium; blockage or damage to one of the access veins in the leg; or thrombosis and
embolism formation. Bleeding or bruising at the site where the contrast is injected is a minor complication; delayed
bleeding can also occur but is rare.

See also

• Cardiac catheterization

• Computed tomography angiography

• Contrast medium

• Image intensifier

• Intravenous digital subtraction angiography

• Peripheral artery occlusive disease

External links 


• MDCT - The radiology information resource for Cardiologists, Radiologists, Cardiac and Radiologic Techs - A 

Free Resource For Cardiac Imaging 


• RadiologyInfo - The radiology information resource for patients: Angiography procedures

• Cardiac Catheterization from Angioplasty.Org

• Angiography Equipment from Siemens Medical

• Cardiovascular and Interventional Radiological Society of Europe

• Coronary CT angiography by Eugene Lin



References

[1] "Angiography - Complications" ( Health A-Z. NHS Choices. 2009-06-01. Retrieved 2010-03-24.

[3] cfm?modal=Angio 
[5] http://www. medical. Siemens. com/webapp/wcs/stores/servlet/CategoryDisplay?categoryId=12751&langId=-l&catalogId=-l& 

[6] http://www.drse. org/index.php?pid=85 

Coronary catheterization 

A coronary catheterization is a minimally 
invasive procedure to access the coronary 
circulation and blood filled chambers of the 
heart using a catheter. It is performed for 
both diagnostic and interventional 
(treatment) purposes. 

Coronary catheterization is one of the 
several cardiology diagnostic tests and 
procedures. Specifically, coronary 

catheterization is a visually interpreted test 
performed to recognize occlusion, stenosis, 
restenosis, thrombosis or aneurysmal 
enlargement of the coronary artery lumens; 
heart chamber size; heart muscle contraction 
performance; and some aspects of heart 
valve function. Important internal heart and 
lung blood pressures, not measurable from 
outside the body, can be accurately 
measured during the test. The relevant 
problems that the test deals with most 

A coronary angiogram (an X-ray with radio-opaque contrast in the coronary arteries) that shows the left coronary
circulation. The distal left main coronary artery (LMCA) is in the left upper quadrant of the image. Its main branches
(also visible) are the left circumflex artery (LCX), which courses top-to-bottom initially and then toward the
centre/bottom, and the left anterior descending (LAD) artery, which courses from left-to-right on the image and then
down the middle of the image to project underneath the distal LCX. The LAD, as is usual, has two large diagonal
branches, which arise at the centre-top of the image and course toward the centre/right of the image.

commonly occur as a result of advanced atherosclerosis, that is, atheroma activity within the wall of the coronary
arteries. Less frequently, valvular, heart muscle, or arrhythmia issues are the primary focus of the test.


Coronary artery luminal narrowing reduces the flow reserve for oxygenated blood to the heart, typically producing 
intermittent angina. Very advanced luminal occlusion usually produces a heart attack. However, it has been 
increasingly recognized, since the late 1980s, that coronary catheterization does not allow the recognition of the 
presence or absence of coronary atherosclerosis itself, only significant luminal changes which have occurred as a 
result of end stage complications of the atherosclerotic process. See IVUS and atheroma for a better understanding of 
this issue. 


The technique of angiography was first developed in 1927 by the Portuguese physician Egas Moniz to provide 
contrasted x-ray in order to diagnose nervous diseases, such as tumors, coronary heart disease and arteriovenous 
malformations. He is recognized as one of the pioneers in this field. 

Coronary catheterization was further explored in 1929, when the German physician Werner Forssmann inserted a
plastic tube in his cubital vein and guided it to the right chamber of the heart. He took an X-ray to prove his success
and published it on November 5, 1929 under the title "Über die Sondierung des rechten Herzens" (About probing of
the right heart). The coronarography of the left heart was introduced in 1953 with the report by a Portuguese group, 
published in Cardiologia, International Archives of Cardiology volume 22, pages 45-61, by E. Coelho et al., entitled 
L'arteriograpie des coronaires chez l'homme vivant. They were the first to inject radiocontrast in the coronary 
arteries. In 1954 George C. Willis injected the coronary arteries with contrast medium in order to monitor the 
reversal of coronary artery disease after proving to his own satisfaction, that Guinea pigs suffered from the same 
scurvy related coronary atherosclerosis as humans lacking vitamin C. He successfully demonstrated reversal of 
coronary artery disease or its arrest in 70% of his sample in the St. Anne's and Queen Mary VA hospitals of
Montreal. His paper "Serial arteriography in atherosclerosis" (GC. Willis, A.W. Light, W.S. Cow. Canadian Med. 
Assn. J. Dec. 1954. Vol 71 Pp. 562-568) is the definitive first report of success in reversing the human disease with 
vitamin C rendering the later development of the coronary bypass unnecessary. In 1960 F. Mason Sones, a pediatric 
cardiologist at the Cleveland Clinic, accidentally injected radiocontrast in a coronary artery instead of the left 
ventricle. Although the patient had a reversible cardiac arrest, Sones and Shirey developed the procedure further, and 
are credited with the discovery (Connolly 2002); they published a series of 1,000 patients in 1966 (Proudfit et al.).

Since the late 1970s, building on the pioneering work of Charles Dotter in 1964 and especially Andreas Gruentzig 
starting in 1977, coronary catheterization has been extended to therapeutic uses: (a) the performance of less invasive 
physical treatment for angina and some of the complications of severe atherosclerosis, (b) treating heart attacks 
before complete damage has occurred and (c) research for better understanding of the pathology of coronary artery 
disease and atherosclerosis. Today, drawing on the work of Michelson, Morganroth, Nichols and MacVaugh
(Arch Intern Med. 1979 Oct;139(10):1139-41), who established the closest possible relationship between coronary and
retinal atherosclerosis, it might be judged more prudent to determine, using the fundus CardioRetinometry of Bush,
whether and how quickly arterial disease can be reversed before embarking on cardio-thoracic surgery.

In the early 1960s, cardiac catheterization frequently took several hours and involved significant complications for as 
many as 2—3% of patients. With multiple incremental improvements over time, simple coronary catheterization 
examinations are now commonly done more rapidly and with significantly improved outcomes. 


Patient participation 

The patient being examined or treated is usually awake during coronary catheterization, ideally with only local 
anaesthesia such as lidocaine and minimal general sedation, throughout the procedure. Performing the procedure 
with the patient awake is safer as the patient can immediately report any discomfort or problems and thereby 
facilitate rapid correction of any undesirable events. Medical monitors fail to give a comprehensive view of the 
patient's immediate well-being; how the patient feels is often a most reliable indicator of procedural safety. 

Death, myocardial infarction, stroke, serious ventricular arrhythmia, and major vascular complications each occur in
less than 1% of patients undergoing catheterization. However, though the imaging portion of the examination is
often brief, because of setup and safety issues the patient is often in the lab for 20-45 minutes. Any of multiple
technical difficulties, while not endangering the patient (indeed, added to protect the patient's interests), can
significantly increase the examination time.


Coronary catheterization is performed in a cardiac catheterization lab, usually located within a hospital. With current
designs, the patient must lie relatively flat on a narrow, minimally padded, radiolucent (transparent to X-ray) table.
The X-Ray source and imaging camera equipment are on opposite sides of the patient's chest and freely move, under 
motorized control, around the patient's chest so images can be taken quickly from multiple angles. More advanced 
equipment, termed a bi-plane cath lab, uses two sets of X-Ray source and imaging cameras, each free to move 
independently, which allows two sets of images to be taken with each injection of radiocontrast agent. 

The equipment and installation required to perform such testing typically represent a capital expenditure of US$2-5
million (2004), sometimes more, partially repeated every few years.

Diagnostic procedures 

During coronary catheterization (often referred to as a cath by physicians), blood pressures are recorded and X-Ray 
motion picture shadow-grams of the blood inside the coronary arteries are recorded. In order to create the X-ray 
pictures, a physician guides a small tube-like device called a catheter, typically ~2.0 mm (6-French) in diameter,
through the large arteries of the body until the tip is just within the opening of one of the coronary arteries. By 
design, the catheter is smaller than the lumen of the artery it is placed in; internal/intraarterial blood pressures are 
monitored through the catheter to verify that the catheter does not block blood flow. 

The catheter is itself designed to be radiodense for visibility and it allows a clear, watery, blood compatible 
radiocontrast agent, commonly called an X-ray dye, to be selectively injected and mixed with the blood flowing 
within the artery. Typically 3—8 cc of the radiocontrast agent is injected for each image to make the blood flow 
visible for about 3—5 seconds as the radiocontrast agent is rapidly washed away into the coronary capillaries and then 
coronary veins. Without the X-ray dye injection, the blood and surrounding heart tissues appear, on X-ray, as only a 
mildly-shape-changing, otherwise uniform water density mass; no details of the blood and internal organ structure 
are discernible. The radiocontrast within the blood allows visualization of the blood flow within the arteries or heart 
chambers, depending on where it is injected. 

If atheroma, or clots, are protruding into the lumen, producing narrowing, the narrowing may be seen instead as 
increased haziness within the X-ray shadow images of the blood/dye column within that portion of the artery; this is 
as compared to adjacent, presumed healthier, less stenotic areas. See the single-frame illustration of a coronary
angiogram image on the angioplasty page. 

For guidance regarding catheter positions during the examination, the physician mostly relies on detailed knowledge 
of internal anatomy, guide wire and catheter behavior and intermittently, briefly uses fluoroscopy and a low X-ray 
dose to visualize when needed. This is done without saving recordings of these brief looks. When the physician is 
ready to record diagnostic views, which are saved and can be more carefully scrutinized later, he activates the 


equipment to apply a significantly higher X-ray dose, termed cine, in order to create better quality motion picture 
images, having sharper radiodensity contrast, typically at 30 frames per second. The physician controls the
contrast injection and the fluoroscopy and cine timing so as to minimize the total amount of radiocontrast
injected, and times the X-ray exposure to the injection so as to minimize the total amount of X-ray used. Doses of
radiocontrast agents and X-ray exposure times are routinely recorded in an effort to maximize safety.

Though not the focus of the test, calcification within the artery walls, located at the outer edges of atheroma, is
sometimes recognizable on fluoroscopy (without contrast injection) as radiodense halo rings partially encircling the
blood-filled lumen, from which they are separated by the intervening radiolucent atheroma tissue and endothelial
lining. Even though calcification is usually present, it is usually only visible when quite advanced and when
calcified sections of the artery wall happen to be viewed end-on, tangentially through multiple rings of calcification,
so as to create enough radiodensity to be visible on fluoroscopy.

Therapeutic procedures 

By changing the diagnostic catheter to a guiding catheter, physicians can also pass a variety of instruments through 
the catheter and into the artery to a lesion site. The most commonly used are 0.014-inch-diameter (0.36 mm) guide 
wires and the balloon dilation catheters. 

By injecting radiocontrast agent through a tiny passage extending down the balloon catheter and into the balloon, the 
balloon is progressively expanded. The hydraulic pressures are chosen and applied by the physician, according to 
how the balloon within the stenosis responds. The radiocontrast-filled balloon is watched under fluoroscopy as it
opens; it typically assumes a "dog bone" shape imposed on the outside of the balloon by the stenosis as the balloon
is expanded. As much hydraulic force is applied as is judged necessary, and visualized to be effective, to make the
stenotic section of the artery lumen visibly enlarge.

Typical normal coronary artery pressures are in the <200 mmHg range (27 kPa). The hydraulic pressures applied 
within the balloon may extend to as high as 19000 mmHg (2,500 kPa). Prevention of over-enlargement is achieved 
by choosing balloons manufactured out of high tensile strength clear plastic membranes. The balloon is initially 
folded around the catheter, near the tip, to create a small cross-sectional profile to facilitate passage though luminal 
stenotic areas and designed to inflate to a specific pre-designed diameter. If over inflated, the balloon material simply 
tears and allows the inflating radiocontrast agent to simply escape into the blood. 
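As a rough sense of scale, the inflation pressures quoted above run more than an order of magnitude beyond arterial pressure; a quick unit-conversion sketch (assuming the standard factor 1 mmHg ≈ 0.133322 kPa) reproduces the figures in the text:

```python
# Unit-conversion check for the pressures quoted above.
# Assumes the standard conversion factor 1 mmHg = 0.133322 kPa.
MMHG_TO_KPA = 0.133322

def mmhg_to_kpa(p_mmhg: float) -> float:
    """Convert a pressure from mmHg to kPa."""
    return p_mmhg * MMHG_TO_KPA

print(round(mmhg_to_kpa(200), 1))    # physiological upper bound, ~26.7 kPa
print(round(mmhg_to_kpa(19000)))     # balloon inflation, ~2533 kPa
```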

Additionally, several other devices can be advanced into the artery via a guiding catheter. These include laser 
catheters, stent catheters, IVUS catheters, Doppler catheter, pressure or temperature measurement catheter and 
various clot and grinding or removal devices. Most of these devices have turned out to be niche devices, only useful 
in a small percentage of situations or for research. 

Stents, which are specially manufactured expandable stainless steel mesh tubes, mounted on a balloon catheter, are 
the most commonly used device beyond the balloon catheter. When the stent/balloon device is positioned within the
stenosis, the balloon is inflated, which, in turn, expands the stent and the artery. The balloon is removed and the stent
remains in place, supporting the inner artery walls in the more open, dilated position. Current stents generally cost 
around $1,000 to 3,000 each (US 2004 dollars), the drug coated ones being the more expensive. 

Advances in catheter based physical treatments 

Interventional procedures have been plagued by restenosis due to the formation of endothelial tissue overgrowth at 
the lesion site. Restenosis is the body's response to the injury of the vessel wall from angioplasty and to the stent as a 
foreign body. As assessed in clinical trials during the late 1980s and 1990s, using only balloon angioplasty (POBA,
plain old balloon angioplasty), up to 50% of patients suffered significant restenosis, but that percentage has dropped
to the single digits to low teens with the introduction of drug-eluting stents. Sirolimus, paclitaxel and
everolimus are the three drugs used in coatings which are currently FDA approved in the United States. As opposed
to bare-metal stents, drug-eluting stents are covered with a medicine that is slowly dispersed with the goal of suppressing
the restenosis reaction. The key to the success of drug coating has been (a) choosing effective agents, (b) developing 
ways of adequately binding the drugs to the stainless surface of the stent struts (the coating must stay bound despite 
marked handling and stent deformation stresses) and (c) developing coating controlled release mechanisms that 
release the drug slowly over about 30 days. 

See also 

• Angiography 

• Interventional cardiology 

• Fractional flow reserve 



[1] Hurst, J. Willis; Fuster, Valentin; O'Rourke, Robert A. (2004). Hurst's The Heart. New York: McGraw-Hill, Medical Publishing Division. pp. 489-90. ISBN 0-07-142264-1.


• Connolly JE. The development of coronary artery surgery: personal recollections. Tex Heart Inst J 2002;29: 10-4. 
PMID 11995842. 

• Proudfit WL, Shirey EK, Sones FM Jr. Selective cine coronary arteriography. Correlation with clinical findings in 
1,000 patients. Circulation 1966;33:901-10. PMID 5942973. 

• Sones FM, Shirey EK. Cine coronary arteriography. Mod Concepts Cardiovasc Dis 1962;31:735-8. PMID 

• Coronary CT angiography, by Eugene Lin

PET Scanning 

Image of a typical positron emission tomography (PET) facility 

Positron emission tomography (PET) is a nuclear 
medicine imaging technique which produces a 
three-dimensional image or picture of functional 
processes in the body. The system detects pairs of 
gamma rays emitted indirectly by a positron-emitting 
radionuclide (tracer), which is introduced into the body 
on a biologically active molecule. Images of tracer 
concentration in 3-dimensional or 4-dimensional space 
(the 4th dimension being time) within the body are then 
reconstructed by computer analysis. In modern 
scanners, this reconstruction is often accomplished with 
the aid of a CT X-ray scan performed on the patient 
during the same session, in the same machine. 

If the biologically active molecule chosen for PET is 
FDG, an analogue of glucose, the concentrations of 
tracer imaged then give tissue metabolic activity, in 
terms of regional glucose uptake. Although use of this 
tracer results in the most common type of PET scan, 
other tracer molecules are used in PET to image the 
tissue concentration of many other types of molecules 
of interest. 


The concept of emission and transmission tomography 

was introduced by David E. Kuhl and Roy Edwards in 

the late 1950s. Their work later led to the design and 

construction of several tomographic instruments at the 

University of Pennsylvania. Tomographic imaging 

techniques were further developed by Michel Ter-Pogossian, Michael E. Phelps and others at the Washington 

University School of Medicine. 

Work by Gordon Brownell, Charles Burnham and their associates at the Massachusetts General Hospital beginning 
in the 1950s contributed significantly to the development of PET technology and included the first demonstration of 
annihilation radiation for medical imaging. Their innovations, including the use of light pipes, and volumetric 
analysis have been important in the deployment of PET imaging. In 1961, James Robertson and his associates at 
Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker."

It is interesting that one of the factors most responsible for the acceptance of positron imaging was the development 
of radiopharmaceuticals. In particular, the development of labeled 2-fluorodeoxy-D-glucose (2FDG) by the 
Brookhaven group under the direction of Al Wolf and Joanna Fowler was a major factor in expanding the scope of 
PET imaging. The compound was first administered to two normal human volunteers by Abass Alavi in August 
1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner 
demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron 
tomographic scanners, to yield the modern procedure. 

PET/CT-System with 16-slice CT; the ceiling mounted device is an 
injection pump for CT contrast agent 



The logical extension of positron instrumentation was a design using two 2-dimensional arrays. PC-I was the first 
instrument using this concept and was designed in 1968, completed in 1969 and reported in 1972. The first 
applications of PC-I in tomographic mode as distinguished from the computed tomographic mode were reported in 
1970. It soon became clear to many of those involved in PET development that a circular or cylindrical array of
detectors was the logical next step in PET instrumentation. Although many investigators took this approach, James
Robertson and Z.H. Cho were the first to propose a ring system, which has become the prototype of the current
shape of PET.

The PET/CT scanner, attributed to Dr David Townsend and Dr Nutt was named by TIME Magazine as the medical 
invention of the year in 2000. 




Detector Block 

Schematic view of a detector block and ring of a 
PET scanner 

To conduct the scan, a short-lived radioactive tracer isotope is injected 
into the living subject (usually into blood circulation). The tracer is 
chemically incorporated into a biologically active molecule. There is a 
waiting period while the active molecule becomes concentrated in 
tissues of interest; then the subject is placed in the imaging scanner. 
The molecule most commonly used for this purpose is 
fluorodeoxyglucose (FDG), a sugar, for which the waiting period is 
typically an hour. During the scan, a record of tissue concentration is made as the tracer decays.

As the radioisotope undergoes positron emission decay (also known as positive beta decay), it emits a positron, an
antiparticle of the electron with opposite charge. The emitted positron travels in tissue for a short distance
(typically less than 1 mm, but dependent on the isotope), during which time it loses kinetic energy, until it
decelerates to a point where it can interact with an electron. The encounter annihilates both electron and positron,
producing a pair of annihilation (gamma) photons moving in approximately opposite directions. These are detected
when they reach a scintillator in the scanning device, creating a burst of light which is detected by photomultiplier
tubes or silicon avalanche photodiodes (Si APDs). The technique depends on simultaneous or coincident detection of
the pair of photons moving in approximately opposite directions (they would be exactly opposite in their
center-of-mass frame, but the scanner has no way to know this, and so has a built-in slight direction-error
tolerance). Photons that do not arrive in temporal "pairs" (i.e. within a timing window of a few nanoseconds) are
ignored.
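The coincidence test described above can be sketched as a simple timestamp-pairing routine; the 4 ns window and the hit times below are illustrative values, not parameters of any particular scanner:

```python
# Minimal sketch of coincidence sorting: photon detections arriving
# within a short timing window (here 4 ns) are paired; unmatched
# "singles" are discarded.  Timestamps (ns) are illustrative.
WINDOW_NS = 4.0

def find_coincidences(timestamps):
    """Pair detections whose arrival times differ by <= WINDOW_NS."""
    events = sorted(timestamps)
    pairs, i = [], 0
    while i < len(events) - 1:
        if events[i + 1] - events[i] <= WINDOW_NS:
            pairs.append((events[i], events[i + 1]))
            i += 2  # both photons consumed as one coincidence event
        else:
            i += 1  # a single: no partner within the window
    return pairs

hits = [10.0, 12.5, 100.0, 250.0, 251.8]
print(find_coincidences(hits))  # [(10.0, 12.5), (250.0, 251.8)]
```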

Schema of a PET acquisition process 

Localization of the positron annihilation event 

The most significant fraction of electron-positron annihilations results in two 511 keV gamma photons being emitted at
almost 180 degrees to each other; hence it is possible to localize their source along a straight line of coincidence 
(also called formally the line of response or LOR). In practice the LOR has a finite width as the emitted photons are 
not exactly 180 degrees apart. If the resolving time of the detectors is less than 500 picoseconds rather than about 10 
nanoseconds, it is possible to localize the event to a segment of a chord, whose length is determined by the detector 
timing resolution. As the timing resolution improves, the signal-to-noise ratio (SNR) of the image will improve, 
requiring fewer events to achieve the same image quality. This technology is not yet common, but it is available on
some new systems.
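The chord-segment localization follows directly from the timing resolution: the segment length is roughly c·Δt/2. A small sketch using the speed of light and the 500 ps / 10 ns figures mentioned above:

```python
# Time-of-flight localization: a detector timing resolution dt confines
# the annihilation point to a segment of length c * dt / 2 along the LOR.
C_M_PER_S = 2.998e8  # speed of light in m/s

def tof_segment_cm(dt_seconds: float) -> float:
    """Length (cm) of the LOR segment resolvable with timing resolution dt."""
    return C_M_PER_S * dt_seconds / 2 * 100  # metres -> centimetres

print(round(tof_segment_cm(500e-12), 1))  # 500 ps -> ~7.5 cm segment
print(round(tof_segment_cm(10e-9)))       # 10 ns  -> ~150 cm (no useful TOF)
```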


Image reconstruction using coincidence statistics 

More commonly, a technique much like the reconstruction of computed tomography (CT) and single photon 
emission computed tomography (SPECT) data is used, although the data set collected in PET is much poorer than 
CT, so reconstruction techniques are more difficult (see Image reconstruction of PET). 

Using statistics collected from tens-of-thousands of coincidence events, a set of simultaneous equations for the total 
activity of each parcel of tissue along many LORs can be solved by a number of techniques, and thus a map of 
radioactivities as a function of location for parcels or bits of tissue (also called voxels), may be constructed and 
plotted. The resulting map shows the tissues in which the molecular tracer has become concentrated, and can be 
interpreted by a nuclear medicine physician or radiologist in the context of the patient's diagnosis and treatment plan. 
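As a toy illustration of "solving a set of simultaneous equations for the total activity along many LORs", consider a hypothetical two-voxel phantom crossed by two LORs; real reconstructions solve enormous, noisy versions of the same idea:

```python
# Toy version of the reconstruction idea: each LOR measurement is the
# sum of activities of the voxels it crosses, giving simultaneous
# equations.  Coefficients and counts below are invented for illustration.
def solve_2x2(a11, a12, a21, a22, y1, y2):
    """Cramer's rule for a 2x2 linear system."""
    det = a11 * a22 - a12 * a21
    return ((y1 * a22 - a12 * y2) / det, (a11 * y2 - y1 * a21) / det)

# Hypothetical system: LOR1 crosses voxels A and B with equal weight,
# LOR2 crosses only voxel B.
#   a + b = 9.0   (counts on LOR1)
#       b = 4.0   (counts on LOR2)
activity_a, activity_b = solve_2x2(1, 1, 0, 1, 9.0, 4.0)
print(activity_a, activity_b)  # 5.0 4.0
```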

A complete body PET / CT Fusion image 


Combination of PET with CT or MRI 

PET scans are increasingly read alongside CT or magnetic 
resonance imaging (MRI) scans, the combination 
("co-registration") giving both anatomic and metabolic 
information (i.e., what the structure is, and what it is doing 
biochemically). Because PET imaging is most useful in 
combination with anatomical imaging, such as CT, modern PET 
scanners are now available with integrated high-end 
multi-detector-row CT scanners. Because the two scans can be 
performed in immediate sequence during the same session, with 
the patient not changing position between the two types of scans, 
the two sets of images are more-precisely registered, so that areas 
of abnormality on the PET imaging can be more perfectly 
correlated with anatomy on the CT images. This is very useful in 
showing detailed views of moving organs or structures with higher 
anatomical variation, which is more common outside the brain. 

At the Jülich Institute of Neurosciences and Biophysics, the
world's largest PET/MRI device began operation in April 2009: a
9.4-tesla magnetic resonance tomograph (MRT) combined with a
positron emission tomograph (PET). Presently, only the head and
brain can be imaged at these high magnetic field strengths.


A Brain PET / MRI Fusion image 


Radionuclides used in PET scanning are typically isotopes with short half-lives such as carbon-11 (~20 min),
nitrogen-13 (~10 min), oxygen-15 (~2 min), and fluorine-18 (~110 min). These radionuclides are incorporated either
into compounds normally used by the body such as glucose (or glucose analogues), water or ammonia, or into 
molecules that bind to receptors or other sites of drug action. Such labelled compounds are known as radiotracers. It 
is important to recognize that PET technology can be used to trace the biologic pathway of any compound in living 
humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Thus the specific 
processes that can be probed with PET are virtually limitless, and radiotracers for new target molecules and 
processes are being synthesized all the time; as of this writing there are already dozens in clinical use and hundreds
applied in research. Presently, however, by far the most commonly used nuclide in clinical PET scanning is
fluorine-18 in the form of FDG. 

Due to the short half lives of most radioisotopes, the radiotracers must be produced using a cyclotron in close 
proximity to the PET imaging facility. The half life of fluorine-18 is long enough that it can be manufactured 
commercially at an offsite location. 


The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides.
Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response
to therapy, in particular cancer therapy, where the risk to the patient from lack of knowledge about disease
progress is much greater than the risk from the test radiation.

Limitations to the widespread use of PET arise from the high costs of cyclotrons needed to produce the short-lived
radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce
the radiopharmaceuticals. Few hospitals and universities are capable of maintaining such systems, and most clinical
PET is supported by third-party suppliers of radiotracers which can supply many sites simultaneously. This
limitation restricts clinical PET primarily to the use of tracers labelled with fluorine-18, which has a half-life of 110
minutes and can be transported a reasonable distance before use, or to rubidium-82, which can be created in a
portable generator and is used for myocardial perfusion studies. Nevertheless, in recent years a few on-site
cyclotrons with integrated shielding and hot labs have begun to accompany PET units to remote hospitals. The
presence of the small on-site cyclotron promises to expand in the future as the cyclotrons shrink in response to the
high cost of isotope transportation to remote PET machines.

Because the half-life of fluorine-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this 
radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent 
recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to 
patient scheduling. 
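The recalibration arithmetic follows directly from exponential decay, A(t) = A0 · 2^(-t / T½); a minimal sketch using an F-18 half-life of about 109.8 minutes and an illustrative 400 MBq dose:

```python
# Decay recalibration sketch: remaining activity of an F-18 dose after
# t minutes.  The half-life value is the standard ~109.8 min for F-18;
# the 400 MBq starting activity is illustrative.
T_HALF_F18_MIN = 109.8

def remaining_activity(a0_mbq: float, t_min: float) -> float:
    """Activity (MBq) left after t_min minutes of decay."""
    return a0_mbq * 2 ** (-t_min / T_HALF_F18_MIN)

# A 400 MBq dose has halved after one half-life (~1 h 50 min)
# and quartered after two - hence the need for frequent recalibration.
print(round(remaining_activity(400, 109.8)))      # 200
print(round(remaining_activity(400, 2 * 109.8)))  # 100
```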

Image reconstruction 

The raw data collected by a PET scanner are a list of 'coincidence events' representing near-simultaneous detection 
of annihilation photons by a pair of detectors. Each coincidence event represents a line in space connecting the two 
detectors along which the positron emission occurred. Modern systems with a high time resolution also use a 
technique (called "Time-of-flight") where they more precisely decide the difference in time between the detection of 
the two photons and can thus limit the length of the earlier mentioned line to around 10 cm. 

Coincidence events can be grouped into projection images, called sinograms. The sinograms are sorted by the angle
of each view and tilt (the latter in 3D images). The sinogram images are analogous to the projections captured by
computed tomography (CT) scanners, and can be reconstructed in a similar way. However, the statistics of the data
are much worse than those obtained through transmission tomography. A normal PET data set has millions of counts
for the whole acquisition, while a CT scan can reach a few billion counts. As such, PET data suffer from scatter and
random events much more dramatically than CT data do.

In practice, considerable pre-processing of the data is required - correction for random coincidences, estimation and 
subtraction of scattered photons, detector dead-time correction (after the detection of a photon, the detector must 
"cool down" again) and detector-sensitivity correction (for both inherent detector sensitivity and changes in 
sensitivity due to angle of incidence). 

Filtered back projection (FBP) has been frequently used to reconstruct images from the projections. This algorithm
has the advantage of being simple while having a low requirement for computing resources. However, shot noise in
the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across
the image.

Iterative expectation-maximization algorithms are now the preferred method of reconstruction. The advantage is a
better noise profile and resistance to the streak artifacts common with FBP, but the disadvantage is higher computer 
resource requirements. 
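A minimal sketch of one such expectation-maximization scheme (MLEM) on a hypothetical two-voxel, three-LOR system; the system matrix and counts below are invented for illustration, and real implementations work on millions of LORs with noise:

```python
# Minimal MLEM (maximum-likelihood expectation-maximization) sketch.
# A[i][j] = 1 if LOR i crosses voxel j; y[i] are measured counts.
# Each iteration scales the estimate by back-projected
# measured/expected count ratios, normalized by voxel sensitivity.
A = [[1, 0], [0, 1], [1, 1]]
y = [2.0, 3.0, 5.0]  # noiseless counts from true activities (2, 3)

def mlem(A, y, n_iter=50):
    n_vox = len(A[0])
    x = [1.0] * n_vox  # uniform initial estimate
    sens = [sum(row[j] for row in A) for j in range(n_vox)]
    for _ in range(n_iter):
        # Forward project the current estimate along every LOR.
        expected = [sum(A[i][j] * x[j] for j in range(n_vox))
                    for i in range(len(A))]
        # Multiplicative update from measured/expected ratios.
        for j in range(n_vox):
            ratio = sum(A[i][j] * y[i] / expected[i] for i in range(len(A)))
            x[j] *= ratio / sens[j]
    return x

print([round(v, 2) for v in mlem(A, y)])  # converges toward [2.0, 3.0]
```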

Attenuation correction: As different LORs must traverse different thicknesses of tissue, the photons are attenuated 
differentially. The result is that structures deep in the body are reconstructed as having falsely low tracer uptake. 
Contemporary scanners can estimate attenuation using integrated x-ray CT equipment; however, earlier equipment
offered a crude form of CT using a gamma ray (positron emitting) source and the PET detectors.

While attenuation-corrected images are generally more faithful representations, the correction process is itself 
susceptible to significant artifacts. As a result, both corrected and uncorrected images are always reconstructed and 
read together. 
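The differential attenuation can be sketched with the Beer-Lambert law: the probability that a photon pair escapes along a LOR through thickness L of tissue is exp(-μL), so the correction factor is its inverse. Assuming μ ≈ 0.096 cm⁻¹ for water at 511 keV (an illustrative value):

```python
import math

# Attenuation sketch: correction factor exp(+mu * L) for a LOR that
# traverses L cm of water-equivalent tissue.  mu ~ 0.096 /cm is an
# assumed value for 511 keV photons in water.
MU_WATER_511KEV = 0.096  # 1/cm

def attenuation_correction(path_cm: float) -> float:
    """Multiplicative correction for a LOR through path_cm of tissue."""
    return math.exp(MU_WATER_511KEV * path_cm)

# A LOR through 20 cm of tissue needs roughly a 7x correction, and one
# through 30 cm roughly 18x - why deep structures look falsely cold.
print(round(attenuation_correction(20), 1))
print(round(attenuation_correction(30), 1))
```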

2D/3D reconstruction: Early PET scanners had only a single ring of detectors, hence the acquisition of data and 
subsequent reconstruction was restricted to a single transverse plane. More modern scanners now include multiple 
rings, essentially forming a cylinder of detectors. 

There are two approaches to reconstructing data from such a scanner: 1) treat each ring as a separate entity, so that 
only coincidences within a ring are detected, the image from each ring can then be reconstructed individually (2D 
reconstruction), or 2) allow coincidences to be detected between rings as well as within rings, then reconstruct the 
entire volume together (3D). 

3D techniques have better sensitivity (because more coincidences are detected and used) and therefore less noise, but 
are more sensitive to the effects of scatter and random coincidences, as well as requiring correspondingly greater 
computer resources. The advent of sub-nanosecond timing resolution detectors affords better random coincidence 
rejection, thus favoring 3D image reconstruction. 




PET is both a medical and research tool. It is used heavily in clinical 
oncology (medical imaging of tumors and the search for metastases), 
and for clinical diagnosis of certain diffuse brain diseases such as those 
causing various types of dementias. PET is also an important research 
tool to map normal human brain and heart function. 

PET is also used in pre-clinical studies using animals, where it allows 
repeated investigations into the same subjects. This is particularly 
valuable in cancer research, as it results in an increase in the statistical 
quality of the data (subjects can act as their own control) and 
substantially reduces the numbers of animals required for a given study.
Alternative methods of scanning include x-ray computed tomography 
(CT), magnetic resonance imaging (MRI) and functional magnetic 
resonance imaging (fMRI), ultrasound and single photon emission 
computed tomography (SPECT). 

While some imaging scans such as CT and MRI isolate organic 
anatomic changes in the body, PET and SPECT are capable of 
detecting areas of molecular biology detail (even prior to anatomic 
change). PET scanning does this using radiolabelled molecular probes 
that have different rates of uptake depending on the type and function 
of tissue involved. Changes in regional blood flow in various
anatomic structures (as a measure of the injected positron emitter)
can be visualized and relatively quantified with a PET scan.

Maximum intensity projection (MIP) of an F-18 FDG whole-body PET acquisition; liver metastases of a colorectal
tumor are clearly visible within the abdominal region of the image. Normal physiological isotope uptake is seen in
the brain, renal collecting systems and bladder. In this animation, it is important to view the subject as rotating
clockwise (note liver position).

PET imaging is best performed using a dedicated PET scanner. However, it is possible to acquire PET images using 
a conventional dual-head gamma camera fitted with a coincidence detector. The quality of gamma-camera PET is 
considerably lower, and acquisition is slower. However, for institutions with low demand for PET, this may allow 
on-site imaging, instead of referring patients to another center, or relying on a visit by a mobile scanner. 

PET is a valuable technique for some diseases and disorders, because it is possible to target the radio-chemicals used 
for particular bodily functions. 

1. Oncology: PET scanning with the tracer fluorine-18 (F-18) fluorodeoxyglucose (FDG), called FDG-PET, is
widely used in clinical oncology. This tracer is a glucose analog that is taken up by glucose-using cells and 
phosphorylated by hexokinase (whose mitochondrial form is greatly elevated in rapidly growing malignant 
tumours). A typical dose of FDG used in an oncological scan is 200-400 MBq for an adult human. Because the 
oxygen atom which is replaced by F-18 to generate FDG is required for the next step in glucose metabolism in all 
cells, no further reactions occur in FDG. Furthermore, most tissues (with the notable exception of liver and 
kidneys) cannot remove the phosphate added by hexokinase. This means that FDG is trapped in any cell which 
takes it up, until it decays, since phosphorylated sugars, due to their ionic charge, cannot exit from the cell. This 
results in intense radiolabeling of tissues with high glucose uptake, such as the brain, the liver, and most cancers. 
As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in 
Hodgkin's lymphoma, non-Hodgkin lymphoma, and lung cancer. Many other types of solid tumors will be found 
to be very highly labeled on a case-by-case basis — a fact which becomes especially useful in searching for tumor 
metastasis, or for recurrence after a known highly active primary tumor is removed. Because individual PET 
scans are more expensive than "conventional" imaging with computed tomography (CT) and magnetic resonance 
imaging (MRI), expansion of FDG-PET in cost-constrained health services will depend on proper health
technology assessment; this problem is a difficult one because structural and functional imaging often cannot be
directly compared, as they provide different information. Oncology scans using FDG make up over 90% of all
PET scans in current practice.

2. Neurology: PET neuroimaging is based on an assumption that areas of high radioactivity are associated with
brain activity. What is actually measured indirectly is the flow of blood to different parts of the brain, which is
generally believed to be correlated, and has been measured using the tracer oxygen-15. However, because of its
2-minute half-life, O-15 must be piped directly from a medical cyclotron for such uses, and this is difficult. In
practice, since the brain is normally a rapid user of glucose, and since brain pathologies such as Alzheimer's
disease greatly decrease brain metabolism of both glucose and oxygen in tandem, standard FDG-PET of the brain,
which measures regional glucose use, may also be successfully used to differentiate Alzheimer's disease from
other dementing processes, and also to make early diagnosis of Alzheimer's disease. The advantage of FDG-PET
for these uses is its much wider availability. PET imaging with FDG can also be used for localization of seizure
focus: a seizure focus will appear as hypometabolic during an interictal scan. Several radiotracers (i.e.
radioligands) have been developed for PET that are ligands for specific neuroreceptor subtypes such as
[11C]raclopride and [18F]fallypride for dopamine D2/D3 receptors, [11C]McN 5652 and [11C]DASB for
serotonin transporters, or enzyme substrates (e.g. 6-FDOPA for the AADC enzyme). These agents permit the
visualization of neuroreceptor pools in the context of a plurality of neuropsychiatric and neurologic illnesses. A
novel probe developed at the University of Pittsburgh termed PIB (Pittsburgh compound B) permits the
visualization of amyloid plaques in the brains of Alzheimer's patients. This technology could assist clinicians in
making a positive clinical diagnosis of AD pre-mortem and aid in the development of novel anti-amyloid
therapies. [11C]PMP (N-[11C]methylpiperidin-4-yl propionate) is a novel radiopharmaceutical used in PET
imaging to determine the activity of the acetylcholinergic neurotransmitter system by acting as a substrate for
acetylcholinesterase. Post-mortem examination of AD patients has shown decreased levels of acetylcholinesterase.
[11C]PMP is used to map the acetylcholinesterase activity in the brain, which could allow for pre-mortem
diagnosis of AD and help to monitor AD treatments. Avid Radiopharmaceuticals of Philadelphia has developed a
compound called 18F-AV-45 that uses the longer-lasting radionuclide fluorine-18 to detect amyloid plaques using
PET scans.

PET scan of the human brain.

3. Cardiology, atherosclerosis and vascular disease study: In clinical cardiology, FDG-PET can identify so-called
"hibernating myocardium", but its cost-effectiveness in this role versus SPECT is unclear. Recently, a role has
been suggested for FDG-PET imaging of atherosclerosis to detect patients at risk of stroke [17].

4. Neuropsychology / Cognitive neuroscience: To examine links between specific psychological processes or
disorders and brain activity.

5. Psychiatry: Numerous compounds that bind selectively to neuroreceptors of interest in biological psychiatry have
been radiolabeled with C-11 or F-18. Radioligands that bind to dopamine receptors (D1, D2, reuptake transporter),
serotonin receptors (5HT1A, 5HT2A, reuptake transporter), opioid receptors (mu) and other sites have been used
successfully in studies with human subjects. Studies have been performed examining the state of these receptors
in patients compared to healthy controls in schizophrenia, substance abuse, mood disorders and other psychiatric
conditions.

6. Pharmacology: In pre-clinical trials, it is possible to radiolabel a new drug and inject it into animals. Such scans
are referred to as biodistribution studies. The uptake of the drug, the tissues in which it concentrates, and its
eventual elimination, can be monitored far more quickly and cost effectively than the older technique of killing
and dissecting the animals to discover the same information. Much more commonly, however, drug occupancy at 
a purported site of action can be inferred indirectly by competition studies between unlabeled drug and 
radiolabeled compounds known a priori to bind with specificity to the site. A single radioligand can be used this
way to test many potential drug candidates for the same target. A related technique involves scanning with 
radioligands that compete with an endogenous (naturally occurring) substance at a given receptor to demonstrate 
that a drug causes the release of the natural substance. 

7. PET technology for small animal imaging: A miniature PET tomograph has been constructed that is small enough 


for a fully conscious and mobile rat to wear on its head while walking around. This RatCAP (Rat Conscious 
Animal PET) allows animals to be scanned without the confounding effects of anesthesia. PET scanners designed 
specifically for imaging rodents or small primates are marketed for academic and pharmaceutical research. 

8. Musculo-Skeletal Imaging: PET has been shown to be a feasible technique for studying skeletal muscles during 


exercises like walking. One of the main advantages of using PET is that it can also provide muscle activation
data about deeper lying muscles such as the vastus intermedius and the gluteus minimus, as compared to other
muscle studying techniques like Electromyography, which can only be used on superficial muscles (i.e. directly 
under the skin). A clear disadvantage, however, is that PET provides no timing information about muscle 
activation, because it has to be measured after the exercise is completed. This is due to the time it takes for FDG 
to accumulate in the activated muscles. 

Pulse Shape Discrimination 

Pulse shape discrimination (PSD) is a technique used to identify which crystal each detected pulse originated in.
Various techniques have been introduced to discriminate between two types of pulses according to their shape
(arising from the scintillators' different decay times).


PET scanning is non-invasive, but it does involve exposure to ionizing radiation. The total dose of radiation is not insignificant, usually around 5-7 mSv. However, in modern practice a combined PET/CT scan is almost always performed, and for PET/CT scanning the radiation exposure may be substantial: around 23-26 mSv for a 70 kg person (the dose is likely to be higher for higher body weights). When compared to the classification level for radiation workers in the UK, of 6 mSv, it can be seen that PET scans need proper justification. This can also be compared to the 2.2 mSv average annual background radiation in the UK, 0.02 mSv for a chest x-ray and 6.5-8 mSv for a CT scan of the chest, according to the Chest Journal and the ICRP. [21] [22] A policy change suggested by the IFALPA member associations in 1999 mentioned that an aircrew member is likely to receive a radiation dose of 4-9 mSv per year. [23]
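
To put these figures in perspective, a back-of-the-envelope comparison using the dose values quoted above (taking 25 mSv as an assumed midpoint of the PET/CT range) expresses one scan in years of UK background radiation and in chest x-ray equivalents:

```python
# Dose figures quoted in the text above (mSv); 25.0 is an assumed midpoint.
PET_CT_DOSE = 25.0       # midpoint of the 23-26 mSv PET/CT range
BACKGROUND_UK = 2.2      # average annual UK background dose
CHEST_XRAY = 0.02        # typical chest x-ray dose

years_of_background = PET_CT_DOSE / BACKGROUND_UK
chest_xray_equivalents = PET_CT_DOSE / CHEST_XRAY

print(f"PET/CT is roughly {years_of_background:.1f} years of UK background radiation")
print(f"PET/CT is roughly {chest_xray_equivalents:.0f} chest x-rays")
```

On these numbers one PET/CT corresponds to about 11 years of background exposure, which is why the text stresses that such scans need proper justification.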

PET Scanning 80 

See also 

• Diffuse optical imaging 

• Hot cell (Equipment used to produce the radiopharmaceuticals used in PET) 

• Molecular Imaging 


[1] Ter-Pogossian, M.M.; M.E. Phelps, E.J. Hoffman, N.A. Mullani (1975). "A positron-emission transaxial tomograph for nuclear imaging (PET)" (http://www.osti.gov/energycitations/product.biblio.jsp?osti_id=4251398). Radiology 114 (1): 89-98.
[2] Phelps, M.E.; E.J. Hoffman, N.A. Mullani, M.M. Ter-Pogossian (March 1, 1975). "Application of annihilation coincidence detection to transaxial reconstruction tomography" (http://jnm.snmjournals.org/cgi/content/abstract/16/3/210). Journal of Nuclear Medicine 16 (3): 210-224. PMID 1113170.
[3] Sweet, W.H.; G.L. Brownell (1953). "Localization of brain tumors with positron emitters". Nucleonics 11: 40-45.
[4] A Vital Legacy: Biological and Environmental Research in the Atomic Age. U.S. Department of Energy, The Office of Biological and Environmental Research, September 1997, pp. 25-26.
[5] Ido, T.; C-N. Wan, V. Casella, J.S. Fowler, A.P. Wolf, M. Reivich, D.E. Kuhl (1978). "Labeled 2-deoxy-D-glucose analogs: -labeled 2-deoxy-2-fluoro-D-glucose, 2-deoxy-2-fluoro-D-mannose and C-14-2-deoxy-2-fluoro-D-glucose". The Journal of Labelled Compounds and Radiopharmaceuticals 14: 175-182.
[6] Brownell, G.L.; C.A. Burnham, B. Hoop Jr., D.E. Bohning (1971). "Quantitative dynamic studies using short-lived radioisotopes and positron detection". In Proceedings of the Symposium on Dynamic Studies with Radioisotopes in Medicine, Rotterdam, August 31 - September 4, 1970. IAEA, Vienna. pp. 161-172.
[7] Robertson, J.S.; R.B. Marr, M. Rosenblum, V. Radeka, Y.L. Yamamoto (1973). "32-Crystal positron transverse section detector". In Tomographic Imaging in Nuclear Medicine, Freedman G.S., editor. The Society of Nuclear Medicine: New York.
[8] Cho, Z.H.; L. Eriksson, J.K. Chan (1975). "A circular ring transverse axial positron camera". In Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, Ter-Pogossian M.M., editor. University Park Press: Baltimore.
[9] Michael E. Phelps (2006). PET: Physics, Instrumentation, and Scanners. Springer. pp. 8-10.
[10] "PET Imaging" (http://www.medcyclopaedia.com/library/topics/volume_i/p/pet_imaging.aspx). GE Healthcare.
[11] "Invitation to Cover: Advancements in "Time-of-Flight" Technology Make New PET/CT Scanner at Penn a First in the World". University of Pennsylvania, June 15, 2006. Retrieved February 22.
[12] "A Close Look Into the Brain" (http://www. php?index=l 172). Julich Research Centre, 29 April 2009. Retrieved 2009-04-29.
[13] Young H, Baum R, Cremerius U, et al. (1999). "Measurement of clinical and subclinical tumour response using [18F]-fluorodeoxyglucose and positron emission tomography: review and 1999 EORTC recommendations". European Journal of Cancer 35 (13): 1773-1782. doi:10.1016/S0959-8049(99)00229-4. PMID 10673991.
[14] Technology | July 2003: Trends in MRI | Medical Imaging.
[15] D.E. Kuhl, R.A. Koeppe, S. Minoshima, S.E. Snyder, E.P. Ficaro, N.L. Foster, K.A. Frey, M.R. Kilbourn (1999). "In vivo mapping of cerebral acetylcholinesterase activity in aging and Alzheimer's disease" (http://www.neurology.org/cgi/content/abstract/52/4/691). Neurology.
[16] Kolata, Gina. "Promise Seen for Detection of Alzheimer's". The New York Times, June 23, 2010. Accessed June 23, 2010.
[18] Rat Conscious Animal PET.
[19] Oi et al. (2003). "FDG-PET imaging of lower extremity muscular activity during level walking". Journal of Orthopaedic Science 8: 55-61.
[20] G. Brix, U. Lechel, G. Glatting, S.I. Ziegler, W. Munzing, S.P. Muller, T. Beyer (2005). "Radiation Exposure of Patients Undergoing Whole-Body Dual-Modality 18F-FDG PET/CT Examinations" (http://jnm.snmjournals.org/cgi/content/full/46/4/608). Journal of Nuclear Medicine.
[21] ICRP presentation on chest CT dose (http://209.85.229.132/search?q=cache:0_m8_v251bEJ:www.asp?document=docs/ICRP_87_CT_s.pps+chest+ct+dose+msv&cd=1&hl=en&ct=clnk&gl=uk&client=firefox-a). ICRP, retrieved 30/10/09.
[22] Chest Journal (http://chestjournal.chestpubs.org/content/133/5/1289.full), retrieved 30/10/09.
[23] "Air crew radiation exposure - An overview". Susan Bailey, Nuclear News (a publication of the American Nuclear Society), January 2000.


Further reading 

• Bustamante E. and Pedersen P.L. (1977). "High aerobic glycolysis of rat hepatoma cells in culture: role of mitochondrial hexokinase". Proceedings of the National Academy of Sciences USA 74 (9): 3735-3739. doi:10.1073/pnas.74.9.3735.

• Dumit Joseph, Picturing Personhood: Brain Scans and Biomedical Identity, Princeton University Press, 2004 

• Herman, Gabor T. (2009). Fundamentals of Computerized Tomography: Image Reconstruction from Projections (2nd ed.). Springer. ISBN 978-1-85233-617-2.

• Klunk WE, Engler H, Nordberg A, Wang Y, Blomqvist G, Holt DP, Bergstrom M, Savitcheva I, Huang GF, 
Estrada S, Ausen B, Debnath ML, Barletta J, Price JC, Sandell J, Lopresti BJ, Wall A, Koivisto P, Antoni G, 
Mathis CA, and Langstrom B. (2004). "Imaging brain amyloid in Alzheimer's disease with Pittsburgh 
Compound-B". Annals of Neurology 55 (3): 306-319. doi:10.1002/ana.20009. PMID 14991808. 

External links 

• PET Images ( 
acr_pre=&filter_p=&acr_post=#top) Search MedPix(r) 

• Seeing is believing: In vivo functional real-time imaging of transplanted islets using positron emission 
tomography (PET)(a protocol) ( 
seeing_is_believing_in_vivo_fu_l.php), Nature Protocols, from Nature Medicine - 12, 1423 - 1428 (2006). 

• The nuclear medicine and molecular medicine podcast ( - Podcast 

• Positron Emission Particle Tracking ( (PEPT) - engineering 
analysis tool based on PET that is able to track single particles in 3D within mixing systems or fluidised beds. 
Developed at the University of Birmingham, UK. 

• CMS coverage of PET scans ( 

• PET-CT atlas Harvard Medical School ( 

Alzheimer's disease 


Classification and external resources

[Figure: Comparison of a normal aged brain (left) and an Alzheimer's patient's brain (right). Differential characteristics are pointed out.]

ICD-10: G30, F00
ICD-9: 331.0, 290.1

Alzheimer's disease (AD) — also called Alzheimer disease, senile dementia of the Alzheimer type (SDAT), 
primary degenerative dementia of the Alzheimer's type (PDDAT), or Alzheimer's — is the most common form of 
dementia. This incurable, degenerative, and terminal disease was first described by German psychiatrist and 

neuropathologist Alois Alzheimer in 1906 and was named after him. Most often, it is diagnosed in people over 

65 years of age, although the less-prevalent early-onset Alzheimer's can occur much earlier. In 2006, there were 

26.6 million sufferers worldwide. Alzheimer's is predicted to affect 1 in 85 people globally by 2050.



Although the course of Alzheimer's disease is unique for every individual, there are many common symptoms. 
The earliest observable symptoms are often mistakenly thought to be 'age-related' concerns, or manifestations of 
stress. In the early stages, the most commonly recognised symptom is inability to acquire new memories, such as 
difficulty in recalling recently observed facts. When AD is suspected, the diagnosis is usually confirmed with 
behavioural assessments and cognitive tests, often followed by a brain scan if available. 

As the disease advances, symptoms include confusion, irritability and aggression, mood swings, language 
breakdown, long-term memory loss, and the general withdrawal of the sufferer as their senses decline. 


Gradually, bodily functions are lost, ultimately leading to death. Individual prognosis is difficult to assess, as the 
duration of the disease varies. AD develops for an indeterminate period of time before becoming fully apparent, and 
it can progress undiagnosed for years. The mean life expectancy following diagnosis is approximately seven 
years. Fewer than three percent of individuals live more than fourteen years after diagnosis. 

The cause and progression of Alzheimer's disease are not well understood. Research indicates that the disease is 


associated with plaques and tangles in the brain. Currently used treatments offer a small symptomatic benefit; no 
treatments to delay or halt the progression of the disease are as yet available. As of 2008, more than 500 clinical 
trials have been conducted for identification of a possible treatment for AD, but it is unknown if any of the tested 


intervention strategies will show promising results. A number of non-invasive, life-style habits have been 

suggested for the prevention of Alzheimer's disease, but there is a lack of adequate evidence for a link between these 

recommendations and reduced degeneration. Mental stimulation, exercise, and a balanced diet are suggested, as both 

a possible prevention and a sensible way of managing the disease. 

Because AD cannot be cured and is degenerative, management of patients is essential. The role of the main caregiver 
is often taken by the spouse or a close relative. Alzheimer's disease is known for placing a great burden on 
caregivers; the pressures can be wide-ranging, involving social, psychological, physical, and economic elements of 


the caregiver's life. In developed countries, AD is one of the most costly diseases to society. 


The disease course is divided into four stages, with progressive patterns of cognitive and functional impairments. 


The first symptoms are often mistaken as related to aging or stress. Detailed neuropsychological testing can 
reveal mild cognitive difficulties up to eight years before a person fulfills the clinical criteria for diagnosis of AD. 
These early symptoms can affect the most complex daily living activities. The most noticeable deficit is memory 
loss, which shows up as difficulty in remembering recently learned facts and inability to acquire new information. 


Subtle problems with the executive functions of attentiveness, planning, flexibility, and abstract thinking, or 
impairments in semantic memory (memory of meanings, and concept relationships), can also be symptomatic of the 

early stages of AD. Apathy can be observed at this stage, and remains the most persistent neuropsychiatric 

symptom throughout the course of the disease. The preclinical stage of the disease has also been termed mild 

cognitive impairment, but whether this term corresponds to a different diagnostic stage or identifies the first step 

of AD is a matter of dispute. 


In people with AD the increasing impairment of learning and memory eventually leads to a definitive diagnosis. In a 
small portion of them, difficulties with language, executive functions, perception (agnosia), or execution of 
movements (apraxia) are more prominent than memory problems. AD does not affect all memory capacities 
equally. Older memories of the person's life (episodic memory), facts learned (semantic memory), and implicit 
memory (the memory of the body on how to do things, such as using a fork to eat) are affected to a lesser degree 
than new facts or memories. 

Language problems are mainly characterised by a shrinking vocabulary and decreased word fluency, which lead to a 

general impoverishment of oral and written language. In this stage, the person with Alzheimer's is usually 


capable of adequately communicating basic ideas. While performing fine motor tasks such as writing, 

drawing or dressing, certain movement coordination and planning difficulties (apraxia) may be present but they are 

commonly unnoticed. As the disease progresses, people with AD can often continue to perform many tasks 

independently, but may need assistance or supervision with the most cognitively demanding activities. 


Progressive deterioration eventually hinders independence; with subjects being unable to perform most common 

activities of daily living. Speech difficulties become evident due to an inability to recall vocabulary, which leads 

[35] [39] 

to frequent incorrect word substitutions (paraphasias). Reading and writing skills are also progressively lost. 

Complex motor sequences become less coordinated as time passes and AD progresses, so the risk of falling 
[35] [35] 

increases. During this phase, memory problems worsen, and the person may fail to recognise close relatives. 


Long-term memory, which was previously intact, becomes impaired. 

Behavioural and neuropsychiatric changes become more prevalent. Common manifestations are wandering, 
irritability and labile affect, leading to crying, outbursts of unpremeditated aggression, or resistance to caregiving. 

Sundowning can also appear. Approximately 30% of patients develop illusionary misidentifications and other 


delusional symptoms. Subjects also lose insight of their disease process and limitations (anosognosia). Urinary 

incontinence can develop. These symptoms create stress for relatives and caretakers, which can be reduced by 

moving the person from home care to other long-term care facilities. 


During this last stage of AD, the patient is completely dependent upon caregivers. Language is reduced to simple 

[35] [39] 

phrases or even single words, eventually leading to complete loss of speech. Despite the loss of verbal 

language abilities, patients can often understand and return emotional signals. Although aggressiveness can still 

be present, extreme apathy and exhaustion are much more common results. Patients will ultimately not be able to 

perform even the most simple tasks without assistance. Muscle mass and mobility deteriorate to the point where 

they are bedridden, and they lose the ability to feed themselves. AD is a terminal illness with the cause of death 

typically being an external factor such as infection of pressure ulcers or pneumonia, not the disease itself. 


Several competing hypotheses exist trying to explain the cause of the disease. The oldest, on which most currently available drug therapies are based, is the cholinergic hypothesis, which proposes that AD is caused by reduced synthesis of the neurotransmitter acetylcholine. The cholinergic hypothesis has not maintained widespread support, largely because medications intended to treat acetylcholine deficiency have not been very effective. Other cholinergic effects have also been proposed, for example, initiation of large-scale aggregation of amyloid, leading to generalised neuroinflammation.

In 1991, the amyloid hypothesis postulated that amyloid beta (Aβ) deposits are the fundamental cause of the disease. Support for this postulate comes from the location of the gene for the amyloid beta precursor protein (APP) on chromosome 21, together with the fact that people with trisomy 21 (Down Syndrome), who have an extra gene copy, almost universally exhibit AD by 40 years of age. Also, APOE4, the major genetic risk factor for AD, leads to excess amyloid buildup in the brain before AD symptoms arise; thus, Aβ deposition precedes clinical AD. Further evidence comes from the finding that transgenic mice that express a mutant form of the human APP gene develop fibrillar amyloid plaques and Alzheimer's-like brain pathology with spatial learning deficits.

[Figure: Microscopy image of a neurofibrillary tangle, formed by hyperphosphorylated tau protein.]

An experimental vaccine was found to clear the amyloid plaques in early human trials, but it did not have any significant effect on dementia. Researchers have been led to suspect non-plaque Aβ oligomers (aggregates of many monomers) as the primary pathogenic form of Aβ. These toxic oligomers, also referred to as amyloid-derived diffusible ligands (ADDLs), bind to a surface receptor on neurons and change the structure of the synapse, thereby disrupting neuronal communication. One receptor for Aβ oligomers may be the prion protein, the same protein that has been linked to mad cow disease and the related human condition, Creutzfeldt-Jakob disease, thus potentially linking the underlying mechanism of these neurodegenerative disorders with that of Alzheimer's disease.

In 2009, this theory was updated, suggesting that a close relative of the beta-amyloid protein, and not necessarily the 
beta-amyloid itself, may be a major culprit in the disease. The theory holds that an amyloid-related mechanism that 
prunes neuronal connections in the brain in the fast-growth phase of early life may be triggered by aging-related 


processes in later life to cause the neuronal withering of Alzheimer's disease. N-APP, a fragment of APP from the 

peptide's N-terminus, is adjacent to beta-amyloid and is cleaved from APP by one of the same enzymes. N-APP 

triggers the self-destruct pathway by binding to a neuronal receptor called death receptor 6 (DR6, also known as 

TNFRSF21). DR6 is highly expressed in the human brain regions most affected by Alzheimer's, so it is possible 

that the N-APP/DR6 pathway might be hijacked in the aging brain to cause damage. In this model, beta-amyloid 

plays a complementary role, by depressing synaptic function. 

A 2004 study found that deposition of amyloid plaques does not correlate well with neuron loss. This observation 
supports the tau hypothesis, the idea that tau protein abnormalities initiate the disease cascade. In this model, 
hyperphosphorylated tau begins to pair with other threads of tau. Eventually, they form neurofibrillary tangles inside 
nerve cell bodies. When this occurs, the microtubules disintegrate, collapsing the neuron's transport system. 
This may result first in malfunctions in biochemical communication between neurons and later in the death of the 


cells. Herpes simplex virus type 1 has also been proposed to play a causative role in people carrying the 

susceptible versions of the apoE gene. 

Another hypothesis asserts that the disease may be caused by age-related myelin breakdown in the brain. 
Demyelination leads to axonal transport disruptions, leading to loss of neurons that become stale. Iron released 
during myelin breakdown is hypothesized to cause further damage. Homeostatic myelin repair processes contribute 
to the development of proteinaceous deposits such as amyloid-beta and tau. 

Oxidative stress is a significant factor in the formation of the pathology.

AD individuals show a 70% loss of locus coeruleus cells that provide norepinephrine, which (in addition to its neurotransmitter role) locally diffuses from "varicosities" as an endogenous anti-inflammatory agent in the microenvironment around the neurons, glial cells, and blood vessels in the neocortex and hippocampus. It has been shown that norepinephrine stimulates mouse microglia to suppress Aβ-induced production of cytokines and their phagocytosis of Aβ. This suggests that degeneration of the locus coeruleus might be responsible for increased Aβ deposition in AD brains.


Alzheimer's disease is characterised by loss of neurons and synapses in the cerebral cortex and certain subcortical regions. This loss results in gross atrophy of the affected regions, including degeneration in the temporal lobe and parietal lobe, and parts of the frontal cortex and cingulate gyrus. Studies using MRI and PET have documented reductions in the size of specific brain regions in patients as they progressed from mild cognitive impairment to Alzheimer's disease, and in comparison with similar images from healthy older adults.

[Figure: Histopathologic image of senile plaques seen in the cerebral cortex of a person with Alzheimer's disease of presenile onset. Silver impregnation.]

Both amyloid plaques and neurofibrillary tangles are clearly visible by microscopy in brains of those afflicted by AD. Plaques are dense, mostly insoluble deposits of amyloid-beta peptide and cellular material outside and around neurons. Tangles (neurofibrillary tangles) are aggregates of the microtubule-associated protein tau which has become hyperphosphorylated and accumulate inside the cells themselves. Although many older individuals develop some plaques and tangles as a consequence of aging, the brains of AD patients have a greater number of them in specific brain regions such as the temporal lobe. Lewy bodies are not rare in AD patients' brains.

Alzheimer's disease has been identified as a protein misfolding disease (proteopathy), caused by accumulation of abnormally folded A-beta and tau proteins in the brain. Plaques are made up of small peptides, 39-43 amino acids in length, called beta-amyloid (also written as A-beta or Aβ). Beta-amyloid is a fragment from a larger protein called amyloid precursor protein (APP), a transmembrane protein that penetrates through the neuron's membrane. APP is critical to neuron growth, survival and post-injury repair. In Alzheimer's disease, an unknown process causes APP to be divided into smaller fragments by enzymes through proteolysis. One of these fragments gives rise to fibrils of beta-amyloid, which form clumps that deposit outside neurons in dense formations known as senile plaques.

[Figure: Enzymes act on the APP (amyloid precursor protein) and cut it into fragments. The beta-amyloid fragment is crucial in the formation of senile plaques in AD.]


AD is also considered a tauopathy due to abnormal aggregation of the tau protein. [21] [72] Every neuron has a cytoskeleton, an internal support structure partly made up of structures called microtubules. These microtubules act like tracks, guiding nutrients and molecules from the body of the cell to the ends of the axon and back. A protein called tau stabilizes the microtubules when phosphorylated, and is therefore called a microtubule-associated protein. In AD, tau undergoes chemical changes, becoming hyperphosphorylated; it then begins to pair with other threads, creating neurofibrillary tangles and disintegrating the neuron's transport system.

[Figure: In Alzheimer's disease, changes in tau protein lead to the disintegration of microtubules in brain cells.]

Disease mechanism 

Exactly how disturbances of production and aggregation of the beta-amyloid peptide give rise to the pathology of AD is not known. The amyloid hypothesis traditionally points to the accumulation of beta-amyloid peptides as the central event triggering neuron degeneration. Accumulation of aggregated amyloid fibrils, which are believed to be the toxic form of the protein responsible for disrupting the cell's calcium ion homeostasis, induces programmed cell death (apoptosis). It is also known that Aβ selectively builds up in the mitochondria in the cells of Alzheimer's-affected brains, and that it also inhibits certain enzyme functions and the utilisation of glucose by neurons.

Various inflammatory processes and cytokines may also have a role in the pathology of Alzheimer's disease. Inflammation is a general marker of tissue damage in any disease, and may be either secondary to tissue damage in AD or a marker of an immunological response.


Alterations in the distribution of different neurotrophic factors and in the expression of their receptors, such as the brain-derived neurotrophic factor (BDNF), have been described in AD.



The vast majority of cases of Alzheimer's disease are sporadic, meaning that they are not genetically inherited, although some genes may act as risk factors. On the other hand, around 0.1% of the cases are familial forms of autosomal-dominant inheritance, which usually have an onset before age 65.

Most autosomal dominant familial AD can be attributed to mutations in one of three genes: amyloid precursor protein (APP) and presenilins 1 and 2. Most mutations in the APP and presenilin genes increase the production of a small protein called Aβ42, which is the main component of senile plaques. Some of the mutations merely alter the size ratio between Aβ42 and the other major forms (e.g., Aβ40) without increasing Aβ42 levels. This suggests that presenilin mutations can cause disease even if they lower the total amount of Aβ produced, and may point to other roles of presenilin or a role for alterations in the function of APP and/or its fragments other than Aβ.

Most cases of Alzheimer's disease do not exhibit autosomal-dominant inheritance and are termed sporadic AD. Nevertheless, genetic differences may act as risk factors. The best-known genetic risk factor is the inheritance of the ε4 allele of the apolipoprotein E (APOE) gene. Between 40 and 80% of patients with AD possess at least one APOE4 allele. The APOE4 allele increases the risk of the disease by three times in heterozygotes and by 15 times in homozygotes. Geneticists agree that numerous other genes also act as risk factors or have protective effects that influence the development of late-onset Alzheimer's disease. Over 400 genes have been tested for association with late-onset sporadic AD, most with null results.


Alzheimer's disease is usually diagnosed clinically from the patient history, collateral history from relatives, and clinical observations, based on the presence of characteristic neurological and neuropsychological features and the absence of alternative conditions. Advanced medical imaging with computed tomography (CT) or magnetic resonance imaging (MRI), and with single photon emission computed tomography (SPECT) or positron emission tomography (PET), can be used to help exclude other cerebral pathology or subtypes of dementia. Moreover, it may predict conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease.

[Figure: PET scan of the brain of a person with AD showing a loss of function in the temporal lobe.]

Assessment of intellectual functioning, including memory testing, can further characterise the state of the disease. Medical organisations have created diagnostic criteria to ease and standardise the diagnostic process for practicing physicians. The diagnosis can be confirmed with very high accuracy post-mortem, when brain material is available and can be examined histologically.

Diagnostic criteria 

The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer's 
Disease and Related Disorders Association (ADRDA, now known as the Alzheimer's Association) established the 

most commonly used NINCDS-ADRDA Alzheimer's Criteria for diagnosis in 1984, extensively updated in 

2007. These criteria require that the presence of cognitive impairment, and a suspected dementia syndrome, be 

confirmed by neuropsychological testing for a clinical diagnosis of possible or probable AD. A histopathologic 

confirmation including a microscopic examination of brain tissue is required for a definitive diagnosis. Good 

statistical reliability and validity have been shown between the diagnostic criteria and definitive histopathological 

confirmation. Eight cognitive domains are most commonly impaired in AD — memory, language, perceptual 

skills, attention, constructive abilities, orientation, problem solving and functional abilities. These domains are 

equivalent to the NINCDS-ADRDA Alzheimer's Criteria as listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) published by the American Psychiatric Association. [93] [94]

Diagnostic tools 

Neuropsychological tests, such as the mini-mental state examination (MMSE), are widely used to evaluate the cognitive impairments needed for diagnosis. More comprehensive test arrays are necessary for high reliability of results, particularly in the earliest stages of the disease. Neurological examination in early AD will usually provide normal results, except for obvious cognitive impairment, which may not differ from that resulting from other disease processes, including other causes of dementia.

[Figure: Neuropsychological screening tests can help in the diagnosis of AD. In them, patients have to copy drawings similar to the one shown in the picture, remember words, read, and subtract serial numbers.]

Further neurological examinations are crucial in the differential diagnosis of AD and other diseases. Interviews with family members are also utilised in the assessment of the disease. Caregivers can supply important information on the daily living abilities, as well as on the decrease, over time, of the person's mental function. A caregiver's viewpoint is particularly important, since a person with AD is commonly unaware of his own deficits. Many times, families also have difficulties in the detection of initial dementia symptoms and may not communicate accurate information to a physician.

Another recent objective marker of the disease is the analysis of cerebrospinal fluid for amyloid beta or tau 
proteins, both total tau protein and phosphorylated tau protein concentrations. Searching for these 

proteins using a spinal tap can predict the onset of Alzheimer's with a sensitivity of between 94% and 100%. 
When used in conjunction with existing neuroimaging techniques, doctors can identify patients with significant 
memory loss who are already developing the disease. Spinal fluid tests are commercially available, unlike the 
latest neuroimaging technology. Alzheimer's was diagnosed in one-third of the people who did not have any 
symptoms in a 2010 study, meaning that disease progression occurs well before symptoms occur. 
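
A high sensitivity alone does not determine how informative a positive test is; that also depends on the test's specificity and on how common the disease is in the tested population. As an illustration only (the specificity and prevalence below are assumed values, not figures from the text), Bayes' theorem gives the probability of disease after a positive spinal-fluid result:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Sensitivity at the low end of the 94-100% range quoted in the text;
# 90% specificity and 10% prevalence are assumptions for illustration.
ppv = positive_predictive_value(sensitivity=0.94, specificity=0.90, prevalence=0.10)
print(f"Probability of AD given a positive CSF test: {ppv:.2f}")
```

Even with 94% sensitivity, a positive result in a low-prevalence population can correspond to only about a 50% chance of disease under these assumptions, which is one reason such markers are combined with neuroimaging and clinical assessment.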

Supplemental testing provides extra information on some features of the disease or is used to rule out other 
diagnoses. Blood tests can identify other causes for dementia than AD — causes which may, in rare cases, be 
reversible. It is common to perform thyroid function tests, assess B12, rule out syphilis, rule out metabolic problems (including tests for kidney function, electrolyte levels and for diabetes), assess levels of heavy metals (e.g. lead, mercury) and anemia (see differential diagnosis for dementia). It is also necessary to rule out delirium.

Psychological tests for depression are employed, since depression can either be concurrent with AD (see Depression 
of Alzheimer disease), an early sign of cognitive impairment, or even the cause. 

Diagnostic imaging 

When available as a diagnostic tool, single photon emission computed tomography (SPECT) and positron emission 
tomography (PET) neuroimaging are used to confirm a diagnosis of Alzheimer's in conjunction with evaluations 
involving mental status examination. In a person already having dementia, SPECT appears to be superior in 
differentiating Alzheimer's disease from other possible causes, compared with the usual attempts employing mental 
testing and medical history analysis. Advances have led to the proposal of new diagnostic criteria. 

A new technique known as PiB PET has been developed for directly and clearly imaging beta-amyloid deposits in 
vivo using a tracer that binds selectively to the A-beta deposits. The PiB-PET compound uses carbon-11 PET 
scanning. Recent studies suggest that PiB-PET is 86% accurate in predicting which people with mild cognitive 
impairment will develop Alzheimer's disease within two years, and 92% accurate in ruling out the likelihood of 
developing Alzheimer's. 


A similar PET scanning radiopharmaceutical compound called 
(E)-4-(2-(6-(2-(2-(2-([18F]-fluoroethoxy)ethoxy)ethoxy)pyridin-3-yl)vinyl)-N-methyl benzenamine, or 18F AV-45, 
or florbetapir-fluorine-18, or simply florbetapir, which contains the longer-lasting radionuclide fluorine-18, has 
recently been created and tested as a possible diagnostic tool in Alzheimer's patients. Florbetapir, like PiB, 
binds to beta-amyloid, but because it uses fluorine-18 it has a half-life of 110 minutes, in contrast to PiB's radioactive 
half-life of 20 minutes. Wong et al. found that the longer half-life allowed the tracer to accumulate significantly more in 
the brains of the AD patients, particularly in the regions known to be associated with beta-amyloid deposits. 
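The practical consequence of these half-lives follows from the exponential decay law N(t) = N0 · (1/2)^(t/T½). A minimal sketch comparing the two tracers, using the 20- and 110-minute half-lives stated above (the 90-minute delay between tracer synthesis and imaging is an illustrative assumption):

```python
def fraction_remaining(t_min: float, half_life_min: float) -> float:
    """Fraction of a radionuclide's activity remaining after t_min minutes."""
    return 0.5 ** (t_min / half_life_min)

# Hypothetical 90-minute interval between tracer synthesis and imaging.
c11 = fraction_remaining(90, 20)    # carbon-11 (PiB): ~4.4% remaining
f18 = fraction_remaining(90, 110)   # fluorine-18 (florbetapir): ~56.7% remaining
print(f"C-11 remaining: {c11:.1%}, F-18 remaining: {f18:.1%}")
```

Under this assumption more than half of the fluorine-18 activity survives to imaging time, versus only a few percent of the carbon-11 activity, which is consistent with the greater accumulation reported for the longer-lived tracer.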

One review predicted that amyloid imaging is likely to be used in conjunction with other markers rather than as an 
alternative. 
Volumetric MRI can detect changes in the size of brain regions. Measuring those regions that atrophy during the 
progress of Alzheimer's disease is showing promise as a diagnostic indicator. It may prove less expensive than other 
imaging methods currently under study. 

Recent studies suggest that brain metabolite levels may be utilized as biomarkers for Alzheimer's disease. 


Prevention 

At present, there is no definitive evidence that any particular measure 
is effective in preventing AD. Global studies of measures 
to prevent or delay the onset of AD have often produced inconsistent 
results. However, epidemiological studies have proposed relationships 
between certain modifiable factors, such as diet, cardiovascular risk, 
pharmaceutical products, or intellectual activities, and a 
population's likelihood of developing AD. Only further research, 
including clinical trials, will reveal whether these factors can help to 
prevent AD. [119] 

Although cardiovascular risk factors, such as hypercholesterolemia, hypertension, diabetes, and smoking, are 
associated with a higher risk of onset and course of AD, [120][121] statins, which are cholesterol-lowering drugs, 
have not been effective in preventing or improving the course of the disease. [122][123] The components of a 
Mediterranean diet, which include fruit and vegetables, bread, wheat and other cereals, olive oil, fish, and red wine, 
may all individually or together reduce the risk and course of Alzheimer's disease. [124] Its beneficial cardiovascular 
effect has been proposed as the mechanism of action. There is limited evidence that light to moderate use of alcohol, 
particularly red wine, is associated with lower risk of AD. [125] 

Intellectual activities such as playing chess or regular social interaction have been linked to a reduced risk of AD in 
epidemiological studies, although no causal relationship has been found. 

Reviews on the use of vitamins have not found enough evidence of efficacy to recommend vitamin C, E, 
or folic acid with or without vitamin B12 as preventive or treatment agents in AD. [127][128] Additionally, vitamin E 
is associated with important health risks. Trials examining folic acid (B9) and other B vitamins failed to show 
any significant association with cognitive decline. 

Long-term usage of non-steroidal anti-inflammatory drugs (NSAIDs) is associated with a reduced likelihood of 
developing AD. Human postmortem studies, animal models, and in vitro investigations also support the notion 
that NSAIDs can reduce inflammation related to amyloid plaques. However, trials investigating their use as 
palliative treatment have failed to show positive results, while no prevention trial has been completed. Curcumin 
from the curry spice turmeric has shown some effectiveness in preventing brain damage in mouse models due to its 
anti-inflammatory properties. [131][132] Hormone replacement therapy, although previously used, is no longer thought 
to prevent dementia and in some cases may even be related to it. There is inconsistent and unconvincing 
evidence that ginkgo has any positive effect on cognitive impairment and dementia, and a recent study concludes 
that it has no effect in reducing the rate of AD incidence. A 21-year study found that coffee drinkers of 3 to 5 cups 
per day at midlife had a 65% reduction in risk of dementia in late life. 


People who engage in intellectual activities such as reading, playing board games, completing crossword puzzles, 
playing musical instruments, or regular social interaction show a reduced risk for Alzheimer's disease. This is 
compatible with the cognitive reserve theory, which states that some life experiences result in more efficient neural 
functioning, providing the individual a cognitive reserve that delays the onset of dementia manifestations. 
Education delays the onset of AD syndrome but is not related to earlier death after diagnosis. Physical activity is 
also associated with a reduced risk of AD. [139] 

Some studies have shown an increased risk of developing AD with environmental factors such as the intake of metals, 
particularly aluminium, or exposure to solvents. The quality of some of these studies has been 
criticised, [143] and other studies have concluded that there is no relationship between these environmental factors and 
the development of AD. [144][145][146][147] 

While some studies suggest that extremely low frequency electromagnetic fields may increase the risk for 
Alzheimer's disease, reviewers found that further epidemiological and laboratory investigations of this hypothesis are 
needed. [148][149] Smoking is a significant AD risk factor. Systemic markers of the innate immune system are risk 
factors for late-onset AD. 


Treatment 

There is no cure for Alzheimer's disease; available treatments offer relatively small symptomatic benefit and remain 
palliative in nature. Current treatments can be divided into pharmaceutical, psychosocial and caregiving. 


Pharmaceutical 

Four medications are currently approved by regulatory agencies such 
as the U.S. Food and Drug Administration (FDA) and the European 
Medicines Agency (EMA) to treat the cognitive manifestations of AD: 
three are acetylcholinesterase inhibitors and the other is memantine, an 
NMDA receptor antagonist. No drug has an indication for delaying or 
halting the progression of the disease. 

Reduction in the activity of the cholinergic neurons is a well-known 
feature of Alzheimer's disease. Acetylcholinesterase inhibitors are 
employed to reduce the rate at which acetylcholine (ACh) is broken 
down, thereby increasing the concentration of ACh in the brain and 
combating the loss of ACh caused by the death of cholinergic 
neurons. As of 2008, the cholinesterase inhibitors approved for the 
management of AD symptoms are donepezil (brand name Aricept), 
galantamine (Razadyne), and rivastigmine (branded as Exelon 
and Exelon Patch). There is evidence for the efficacy of these 
medications in mild to moderate Alzheimer's disease, and 
some evidence for their use in the advanced stage. Only donepezil is 
approved for treatment of advanced AD dementia. The use of these 
drugs in mild cognitive impairment has not shown any effect in a delay 
of the onset of AD. The most common side effects are nausea and 
vomiting, both of which are linked to cholinergic excess. These side 
effects arise in approximately 10-20% of users and are mild to moderate in severity. Less common secondary effects 
include muscle cramps, decreased heart rate (bradycardia), decreased appetite and weight, and increased gastric acid 
production. 

Three-dimensional molecular model of donepezil, an acetylcholinesterase inhibitor used in the treatment of AD symptoms 

Molecular structure of memantine, a medication approved for advanced AD symptoms 

Glutamate is a useful excitatory neurotransmitter of the nervous system, although excessive amounts in the brain can 
lead to cell death through a process called excitotoxicity which consists of the overstimulation of glutamate 
receptors. Excitotoxicity occurs not only in Alzheimer's disease, but also in other neurological diseases such as 
Parkinson's disease and multiple sclerosis. Memantine (brand names Akatinol, Axura, Ebixa/Abixa, Memox and 
Namenda) is a noncompetitive NMDA receptor antagonist first used as an anti-influenza agent. It acts on the 
glutamatergic system by blocking NMDA receptors and inhibiting their overstimulation by glutamate. 
Memantine has been shown to be moderately efficacious in the treatment of moderate to severe Alzheimer's disease. 
Its effects in the initial stages of AD are unknown. Reported adverse events with memantine are infrequent and 
mild, including hallucinations, confusion, dizziness, headache and fatigue. The combination of memantine and 
donepezil has been shown to be "of statistically significant but clinically marginal effectiveness". 

Antipsychotic drugs are modestly useful in reducing aggression and psychosis in Alzheimer's patients with 
behavioural problems, but are associated with serious adverse effects, such as cerebrovascular events, movement 
difficulties or cognitive decline, that do not permit their routine use. When used long-term, they have 
been shown to be associated with increased mortality. 

Medical marijuana appears to be effective in delaying Alzheimer's disease. The active ingredient in marijuana, THC, 
prevents the formation of deposits in the brain associated with Alzheimer's disease. THC was found to inhibit 
acetylcholinesterase more effectively than commercially marketed drugs, and to delay the progression of the 
disease. [169][170] 

Psychosocial intervention 

Psychosocial interventions are used as an adjunct to pharmaceutical treatment 
and can be classified within behaviour-, emotion-, cognition- or 
stimulation-oriented approaches. Research on efficacy is unavailable and rarely 
specific to AD, focusing instead on dementia in general. 

Behavioural interventions attempt to identify and reduce the antecedents and 
consequences of problem behaviours. This approach has not shown success in 


improving overall functioning, but can help to reduce some specific problem 
behaviours, such as incontinence. There is a lack of high-quality data on the 
effectiveness of these techniques in other behaviour problems such as wandering. 

Emotion-oriented interventions include reminiscence therapy, validation therapy, 
supportive psychotherapy, sensory integration (also called snoezelen), and 
simulated presence therapy. Supportive psychotherapy has received little or no 
formal scientific study, but some clinicians find it useful in helping mildly 
impaired patients adjust to their illness. Reminiscence therapy (RT) involves 
the discussion of past experiences individually or in group, many times with the 
aid of photographs, household items, music and sound recordings, or other 
familiar items from the past. Although there are few quality studies on the effectiveness of RT, it may be beneficial 
for cognition and mood. Simulated presence therapy (SPT) is based on attachment theories and involves playing 
a recording with voices of the closest relatives of the person with Alzheimer's disease. There is partial evidence 
indicating that SPT may reduce challenging behaviours. Finally, validation therapy is based on acceptance of the 
reality and personal truth of another's experience, while sensory integration is based on exercises aimed to stimulate 
senses. There is little evidence to support the usefulness of these therapies. 

A specifically designed room for sensory integration therapy, also called snoezelen: an emotion-oriented 
psychosocial intervention for people with dementia 

The aim of cognition-oriented treatments, which include reality orientation and cognitive retraining, is the reduction 
of cognitive deficits. Reality orientation consists of the presentation of information about time, place or person in 
order to ease a person's understanding of their surroundings and their place in them. Cognitive retraining, on the 
other hand, tries to improve impaired capacities through the exercise of mental abilities. Both have shown some 
efficacy in improving cognitive capacities, although in some studies these effects were transient and negative 
effects, such as frustration, have also been reported. 

Stimulation-oriented treatments include art, music and pet therapies, exercise, and any other kind of recreational 
activity. Stimulation has modest support for improving behaviour, mood, and, to a lesser extent, function. 
Nevertheless, as important as these effects are, the main support for the use of stimulation therapies is the change in 
the person's routine. 


Caregiving 

Since Alzheimer's has no cure and it gradually renders people incapable of tending to their own needs, caregiving 
is essentially the treatment and must be carefully managed over the course of the disease. 

During the early and moderate stages, modifications to the living environment and lifestyle can increase patient 
safety and reduce caretaker burden. Examples of such modifications are the adherence to simplified 

routines, the placing of safety locks, the labelling of household items to cue the person with the disease or the use of 
modified daily life objects. The patient may also become incapable of feeding themselves, so they 

require food in smaller pieces or pureed. When swallowing difficulties arise, the use of feeding tubes may be 
required. In such cases, the medical efficacy and ethics of continuing feeding is an important consideration of the 
caregivers and family members. The use of physical restraints is rarely indicated in any stage of the disease, 

although there are situations when they are necessary to prevent harm to the person with AD or their caregivers. 

As the disease progresses, different medical issues can appear, such as oral and dental disease, pressure ulcers, 
malnutrition, hygiene problems, or respiratory, skin, or eye infections. Careful management can prevent them, while 
professional treatment is needed when they do arise. During the final stages of the disease, treatment is 

centred on relieving discomfort until death. 

A small recent study in the US concluded that patients whose caregivers had a realistic understanding of the 
prognosis and clinical complications of late dementia were less likely to receive aggressive treatment near the end of 
life. [192] 


Prognosis 

The early stages of Alzheimer's disease are difficult to diagnose. A 
definitive diagnosis is usually made once cognitive impairment 
compromises daily living activities, although the person may still be 
living independently. The symptoms will progress from mild cognitive 
problems, such as memory loss, through increasing stages of cognitive 
and non-cognitive disturbances, eliminating any possibility of 
independent living. 

Disability-adjusted life years for Alzheimer's and other dementias per 100,000 inhabitants in 2004, by country 
(ranging from under 50 to over 250) 

Life expectancy of the population with the disease is reduced. 
The mean life expectancy following diagnosis is approximately 
seven years. Fewer than 3% of patients live more than fourteen 
years. Disease features significantly associated with reduced survival are an increased severity of cognitive 
impairment, decreased functional level, history of falls, and disturbances in the neurological examination. Other 
coincident diseases such as heart problems, diabetes or history of alcohol abuse are also related with shortened 
survival. While the earlier the age at onset the higher the total survival years, life expectancy is 
particularly reduced when compared to the healthy population among those who are younger. Men have a less 
favourable survival prognosis than women. 

The disease is the underlying cause of death in 70% of all cases. Pneumonia and dehydration are the most 


frequent immediate causes of death, while cancer is a less frequent cause of death than in the general population. 



Epidemiology 

Table: incidence rates of dementia after age 65, expressed as newly affected per thousand person-years. [198] 

Two main measures are used in epidemiological studies: incidence and prevalence. Incidence is the number of new 
cases per unit of person— time at risk (usually number of new cases per thousand person— years); while prevalence is 
the total number of cases of the disease in the population at any given time. 
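The two measures can be made concrete with a small worked example (the cohort and population figures below are hypothetical, chosen only to illustrate the definitions):

```python
# Incidence: new cases per unit of person-time at risk,
# here expressed per thousand person-years.
# Hypothetical cohort: 2,000 disease-free people followed for 5 years each,
# with 140 new dementia diagnoses over that period.
person_years = 2000 * 5
new_cases = 140
incidence = new_cases / person_years * 1000
print(f"Incidence: {incidence:.0f} per thousand person-years")  # 14

# Prevalence: total existing cases divided by the population at one time.
population = 100_000
existing_cases = 1_500
prevalence = existing_cases / population
print(f"Prevalence: {prevalence:.1%}")  # 1.5%
```

Note that incidence is a rate (it carries a time dimension), while prevalence is a dimensionless proportion at a point in time; the two are linked through how long people survive with the disease.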

Regarding incidence, cohort longitudinal studies (studies where a disease-free population is followed over the years) 
provide rates between 10 and 15 per thousand person-years for all dementias and 5 to 8 for AD, [198][199] which means 
that about half of new dementia cases each year are AD. Advancing age is a primary risk factor for the disease and 
incidence rates are not equal for all ages: every five years after the age of 65, the risk of acquiring the disease 
approximately doubles, increasing from 3 to as much as 69 per thousand person-years. [198][199] There are also sex 
differences in the incidence rates, women having a higher risk of developing AD, particularly in the population older 
than 85. [199][200] 
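The five-year doubling of risk after age 65 implies exponential growth of the age-specific rate. A short sketch of that model (the 3-per-thousand baseline at 65 and the doubling interval are taken from the figures above; the functional form is an illustration, not a fitted model):

```python
def ad_incidence_per_thousand(age: float, base_rate: float = 3.0) -> float:
    """Age-specific incidence, assuming the rate doubles every 5 years after 65."""
    return base_rate * 2 ** ((age - 65) / 5)

for age in (65, 70, 75, 80, 85):
    print(f"age {age}: {ad_incidence_per_thousand(age):.0f} per thousand person-years")

# In this simple model the quoted upper figure of ~69 per thousand
# is reached around age 87-88.
```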

Prevalence of AD in populations is dependent upon different factors including incidence and survival. Since the 
incidence of AD increases with age, it is particularly important to include the mean age of the population of interest. 
In the United States, Alzheimer prevalence was estimated to be 1.6% in 2000, both overall and in the 65-74 age 
group, with the rate increasing to 19% in the 75-84 group and to 42% in the greater-than-84 group. Prevalence 
rates in less developed regions are lower. The World Health Organization estimated that in 2005, 0.379% of 
people worldwide had dementia, and that the prevalence would increase to 0.441% in 2015 and to 0.556% in 
2030. [202][203] Other studies have reached similar conclusions. Another study estimated that in 2006, 0.40% of the 
world population (range 0.17-0.89%; absolute number 26.6 million, range 11.4-59.4 million) were afflicted by AD, 
and that the prevalence rate would triple and the absolute number would quadruple by 2050. 

History 



The ancient Greek and Roman philosophers and physicians associated 
old age with increasing dementia. It was not until 1901 that German 
psychiatrist Alois Alzheimer identified the first case of what became 
known as Alzheimer's disease in a fifty-year-old woman he called 
Auguste D. Alzheimer followed her until she died in 1906, when he 


first reported the case publicly. During the next five years, eleven 
similar cases were reported in the medical literature, some of them 
already using the term Alzheimer's disease. The disease was first 
described as a distinctive disease by Emil Kraepelin after suppressing 
some of the clinical (delusions and hallucinations) and pathological 
features (arteriosclerotic changes) contained in the original report of 
Auguste D. He included Alzheimer's disease, also named presenile 
dementia by Kraepelin, as a subtype of senile dementia in the eighth 

edition of his Textbook of Psychiatry, published in 1910. 


Alois Alzheimer's patient Auguste Deter in 1902. Hers was the first described case of what became known as 
Alzheimer's disease. 

For most of the 20th century, the diagnosis of Alzheimer's disease was 

reserved for individuals between the ages of 45 and 65 who developed symptoms of dementia. The terminology 
changed after 1977 when a conference on AD concluded that the clinical and pathological manifestations of 
presenile and senile dementia were almost identical, although the authors also added that this did not rule out the 


possibility that they had different causes. This eventually led to the diagnosis of Alzheimer's disease 


independently of age. The term senile dementia of the Alzheimer type (SDAT) was used for a time to describe 
the condition in those over 65, with classical Alzheimer's disease being used for those younger. Eventually, the term 
Alzheimer's disease was formally adopted in medical nomenclature to describe individuals of all ages with a 


characteristic common symptom pattern, disease course, and neuropathology. 

Society and culture 

Social costs 

Dementia, and specifically Alzheimer's disease, may be among the most costly diseases for society in Europe and the 
United States, while their cost in other countries such as Argentina [210] or South Korea [211] is also high and 
rising. These costs will probably increase with the ageing of society, becoming an important social problem. 
AD-associated costs include direct medical costs such as nursing home care, direct nonmedical costs such as 
in-home day care, and indirect costs such as lost productivity of both patient and caregiver. Numbers vary 
between studies, but dementia costs worldwide have been calculated at around $160 billion, while costs of 
Alzheimer's in the United States may be $100 billion each year. 


The greatest origin of costs for society is the long-term care by health care professionals and particularly 
institutionalisation, which corresponds to two-thirds of the total costs for society. The cost of living at home is also 
very high, especially when informal costs for the family, such as caregiving time and caregivers' lost earnings, are 
taken into account. 

Costs increase with dementia severity and the presence of behavioural disturbances, and are related to the 
increased caregiving time required for the provision of physical care. Therefore any treatment that slows 
cognitive decline, delays institutionalisation or reduces caregivers' hours will have economic benefits. Economic 
evaluations of current treatments have shown positive results. 


Caregiving burden 

The role of the main caregiver is often taken by the spouse or a close relative. Alzheimer's disease is known for 
placing a great burden on caregivers, which includes social, psychological, physical and economic aspects. 
Home care is usually preferred by patients and families. This option also delays or eliminates the need for more 
professional and costly levels of care. Nevertheless, two-thirds of nursing home residents have dementias. 


Dementia caregivers are subject to high rates of physical and mental disorders. Factors associated with greater 
psychosocial problems of the primary caregivers include having an affected person at home, the carer being a 
spouse, demanding behaviours of the cared-for person such as depression, behavioural disturbances, hallucinations, 
sleep problems or walking disruptions, and social isolation. Regarding economic problems, family 
caregivers often give up time from work to spend 47 hours per week on average with the person with AD, while the 
costs of caring for them are high. Direct and indirect costs of caring for an Alzheimer's patient average between 
$18,000 and $77,500 per year in the United States, depending on the study. [213][220] 

Cognitive behavioural therapy and the teaching of coping strategies, either individually or in group, have 
demonstrated their efficacy in improving caregivers' psychological health. [25][221] 

Notable cases 

As Alzheimer's disease is highly prevalent, many notable people have 
developed it. Well-known examples are former United States President 
Ronald Reagan and Irish writer Iris Murdoch, both of whom were the 
subjects of scientific articles examining how their cognitive capacities 
deteriorated with the disease. Other cases include the 
retired footballer Ferenc Puskás, the former Prime Ministers 
Harold Wilson (United Kingdom) and Adolfo Suárez (Spain), [228][229] 
the actress Rita Hayworth, the actor Charlton Heston, the 
novelist Terry Pratchett, [230] and the 2009 Nobel Prize in Physics 
recipient Charles K. Kao. 

Charlton Heston and Ronald Reagan in a meeting in the White House; both would later develop Alzheimer's 
disease. 


AD has also been portrayed in films such as: Iris (2001), based on 
John Bayley's memoir of his wife Iris Murdoch; [233][234] The Notebook (2004), based on Nicholas Sparks' 1996 
novel of the same name; A Moment to Remember (2004); Thanmathra (2005); Memories of Tomorrow 
(Ashita no Kioku) (2006), based on Hiroshi Ogiwara's novel of the same name; [237][238] and Away from Her (2006), 
based on Alice Munro's short story "The Bear Came over the Mountain". Documentaries on Alzheimer's disease 
include Malcolm and Barbara: A Love Story (1999) and Malcolm and Barbara: Love's Farewell (2007), both 
featuring Malcolm Pointon. 

Research directions 

As of 2008, the safety and efficacy of more than 400 pharmaceutical treatments are being investigated in clinical 
trials worldwide, and approximately a quarter of these compounds are in Phase III trials, the last step prior to review 
by regulatory agencies. 

One area of clinical research is focused on treating the underlying disease pathology. Reduction of amyloid beta 
levels is a common target of compounds (such as apomorphine) under investigation. Immunotherapy or 
vaccination for the amyloid protein is one treatment modality under study. Unlike preventative vaccination, the 
putative therapy would be used to treat people already diagnosed. It is based upon the concept of training the 
immune system to recognise, attack, and reverse deposition of amyloid, thereby altering the course of the 
disease. [244][245][246] An example of such a vaccine under investigation was ACC-001, although the trials were 
suspended in 2008. Another similar agent is bapineuzumab, an antibody designed as identical to the naturally 
induced anti-amyloid antibody. [248][249] Other approaches are neuroprotective agents, such as AL-108, and 
metal-protein interaction attenuation agents, such as PBT2. A TNFα receptor fusion protein, etanercept, has 
shown encouraging results. 

In 2008, two separate clinical trials showed positive results in modifying the course of disease in mild to moderate 
AD with methylthioninium chloride (trade name Rember), a drug that inhibits tau aggregation, [252][253] and dimebon, 
an antihistamine. The subsequent Phase III trial of dimebon failed to show positive effects on the primary and 
secondary endpoints. 

The possibility that AD could be treated with antiviral medication is suggested by a study showing colocation of 
herpes simplex virus with amyloid plaques. 

Preliminary research on the effects of meditation on retrieving memory and cognitive functions has been 
encouraging. Limitations of this research can be addressed in future studies with more detailed analyses. 

See also 

• Art and dementia 


References 

[11] Berchtold NC, Cotman CW (1998). "Evolution in the conceptualization of dementia and Alzheimer's disease: Greco-Roman period to the 
1960s". Neurobiol. Aging 19 (3): 173-89. doi:10.1016/S0197-4580(98)00052-9. PMID 9661992. 

[12] Brookmeyer R., Gray S., Kawas C. (September 1998). "Projections of Alzheimer's disease in the United States and the public health impact 
of delaying disease onset" ( American Journal of 
Public Health 88 (9): 1337-42. doi:10.2105/AJPH.88.9.1337. PMID 9736873. PMC 1509089. 

[13] 2006 prevalence estimate: 

• Brookmeyer, R; Johnson, E; Ziegler-Graham, K; Arrighi, HM (July 2007). "Forecasting the global burden of Alzheimer's disease" (http:// 
works. cgi?article=1022&context=rbrookmeyer). Alzheimer's and Dementia 3 (3): 186—91. 
doi:10.1016/j.jalz.2007 .04.381. PMID 19595937. . Retrieved 2008-06-18. 

• (PDF) World population prospects: the 2006 revision, highlights ( 
WPP2006_Highlights_rev.pdf). Working Paper No. ESA/P/WP.202. Population Division, Department of Economic and Social Affairs, 
United Nations. 2007. . Retrieved 2008-08-27. 

[14] "What is Alzheimer's disease?" ( 

August 2007. . Retrieved 2008-02-21. 
[15] Waldemar G, Dubois B, Emre M, et al. (January 2007). "Recommendations for the diagnosis and management of Alzheimer's disease and 

other disorders associated with dementia: EFNS guideline". Eur J Neurol 14 (1): e1-26. doi:10.1111/j.1468-1331.2006.01605.x. 

PMID 17222085. 
[16] "Alzheimer's diagnosis of AD" ( Alzheimer's Research Trust. . Retrieved 

[17] Tabert MH, Liu X, Doty RL, Serby M, Zamora D, Pelton GH, Marder K, Albers MW, Stern Y, Devanand DP (2005). "A 10-item smell 

identification scale related to risk for Alzheimer's disease". Ann. Neurol. 58 (1): 155-160. doi: 10.1002/ana.20533. PMID 15984022. 
[18] "Understanding stages and symptoms of Alzheimer's disease" ( National 

Institute on Aging. 2007-10-26. . Retrieved 2008-02-21. 
[19] Molsa PK, Marttila RJ, Rinne UK (August 1986). "Survival and cause of death in Alzheimer's disease and multi-infarct dementia". Acta 

Neurol Scand 74 (2): 103-7. doi:10.1111/j.l600-0404.1986.tb04634.x. PMID 3776457. 


[20] Molsa PK, Marttila RJ, Rinne UK (March 1995). "Long-term survival and predictors of mortality in Alzheimer's disease and multi-infarct 

dementia". Acta Neurol Scand 91 (3): 159-64. PMID 7793228. 
[21] Tiraboschi P, Hansen LA, Thal LJ, Corey-Bloom J (June 2004). "The importance of neuritic plaques and tangles to the development and 

evolution of AD". Neurology 62 (11): 1984-9. PMID 15184601. 
[22] "Alzheimer's Disease Clinical Trials" ( US National Institutes of Health. . 

Retrieved 2008-08-18. 
[23] "Can Alzheimer's disease be prevented" (http://www.nia.nih.gOv/NR/rdonlyres/63B5A29C-F943-4DB7-91B4-0296772973F3/0/ 

CanADbePrevented.pdf) (pdf). National Institute on Aging. 2006-08-29. . Retrieved 2008-02-29. 
[24] "The MetLife study of Alzheimer's disease: The caregiving experience" (http://web.archive.Org/web/20080625071754/http://www. (PDF). MetLife Mature Market Institute. 

August 2006. Archived from the original on 2008-06-25. . Retrieved 2008-02-12. 
[25] Thompson CA, Spilsbury K, Hall J, Birks Y, Barnes C, Adamson J (2007). "Systematic review of information and support interventions for 

caregivers of people with dementia" ( BMC 

Geriatr 7: 18. doi:10.1186/1471-2318-7-18. PMID 17662119. PMC 1951962. 
[26] Schneider J, Murray J, Banerjee S, Mann A (August 1999). "EUROCARE: a cross-national study of co-resident spouse carers for people 

with Alzheimer's disease: I — Factors associated with carer burden". International Journal of Geriatric Psychiatry 14 (8): 651—661. 

doi:10.1002/(SICI)1099-1166(199908)14:8<651::AID-GPS992>3.0.CO;2-B. PMID 10489656. 
[27] Murray J, Schneider J, Banerjee S, Mann A (August 1999). "EUROCARE: a cross-national study of co-resident spouse carers for people 

with Alzheimer's disease: II — A qualitative analysis of the experience of caregiving". International Journal of Geriatric Psychiatry 14 (8): 

662-667. doi:10.1002/(SICI)1099-1166(199908)14:8<662::AID-GPS993>3.0.CO;2-4. PMID 10489657. 
[28] Bonin-Guillaume S, Zekry D, Giacobini E, Gold G, Michel JP (January 2005). "Impact economique de la demence (English: The 

economical impact of dementia)" (in French). Presse Med 34 (1): 35-41. ISSN 0755-4982. PMID 15685097. 
[29] Meek PD, McKeithan K, Schumock GT (1998). "Economic considerations in Alzheimer's disease". Pharmacotherapy 18 (2 Pt 2): 68—73; 

discussion 79-82. PMID 9543467. 
[30] Backman L, Jones S, Berger AK, Laukka EJ, Small BJ (Sep 2004). "Multiple cognitive deficits during the transition to Alzheimer's disease". 

J Intern Med 256 (3): 195-204. doi:10.1111/j.l365-2796.2004.01386.x. PMID 15324363. 
[31] Nygard L (2003). "Instrumental activities of daily living: a stepping-stone towards Alzheimer's disease diagnosis in subjects with mild 

cognitive impairment?". Acta Neurol Scand Suppl (179): 42-6. doi:10.1034/j.l600-0404.107.sl79.8.x. PMID 12603250. 
[32] Arnaiz E, Almkvist O (2003). "Neuropsychological features of mild cognitive impairment and preclinical Alzheimer's disease". Acta Neurol. 

Scand, Suppl. 179: 34-41. doi:10.1034/j.l600-0404.107.sl79.7.x. PMID 12603249. 
[33] Landes AM, Sperry SD, Strauss ME, Geldmacher DS (Dec 2001). "Apathy in Alzheimer's disease". J Am Geriatr Soc 49 (12): 1700-7. 

doi:10.1046/j. 1532- 5415.2001.49282.x. PMID 11844006. 
[34] Petersen RC (February 2007). "The current status of mild cognitive impairment — what do we tell our patients?". Nat Clin Pract Neurol 3 

(2): 60-1. doi:10.1038/ncpneuro0402. PMID 17279076. 
[35] Forstl H, Kurz A (1999). "Clinical features of Alzheimer's disease". European Archives of Psychiatry and Clinical Neuroscience 249 (6): 

288-290. doi:10.1007/s004060050101. PMID 10653284. 
[36] Carlesimo GA, Oscar-Berman M (June 1992). "Memory deficits in Alzheimer's patients: a comprehensive review". Neuropsychol Rev 3 (2): 

119-69. doi:10.1007/BF01108841. PMID 1300219. 
[37] Jelicic M, Bonebakker AE, Bonke B (1995). "Implicit memory performance of patients with Alzheimer's disease: a brief review". 

International Psychogeriatrics 7 (3): 385-392. doi:10.1017/S1041610295002134. PMID 8821346. 
[38] Taler V, Phillips NA (Jul 2008). "Language performance in Alzheimer's disease and mild cognitive impairment: a comparative review". J 

Clin Exp Neuropsychol 30 (5): 501-56. doi: 10. 1080/13803390701550128. PMID 1856925. 
[39] Frank EM (September 1994). "Effect of Alzheimer's disease on communication function". JSC Med Assoc 90 (9): 417-23. PMID 7967534. 
[40] Volicer L, Harper DG, Manning BC, Goldstein R, Satlin A (May 2001). "Sundowning and circadian rhythms in Alzheimer's disease" (http:/ 

/*>'c/j!a/o' 158(5): 704-11. doi: 10.1 176/appi.ajp. 158.5.704. 

PMID 11329390. . Retrieved 2008-08-27. 
[41] Gold DP, Reis MF, Markiewicz D, Andres D (January 1995). "When home caregiving ends: a longitudinal study of outcomes for caregivers 

of relatives with dementia". J Am Geriatr Soc 43 (1): 10-6. PMID 7806732. 
[42] Francis PT, Palmer AM, Snape M, Wilcock GK (February 1999). "The cholinergic hypothesis of Alzheimer's disease: a review of progress" 

( J. Neurol. Neurosurg. Psychiatr. 66 (2): 

137-47. doi:10.1136/jnnp.66.2.137. PMID 10071091. PMC 1736202. 
[43] ShenZX (2004). "Brain cholinesterases: II. The molecular and cellular basis of Alzheimer's disease". Med Hypotheses 63 (2): 308—21. 

doi: 10.1016/j.mehy .2004.02.031. PMID 15236795. 
[44] Wenk GL (2003). "Neuropathologic changes in Alzheimer's disease". J Clin Psychiatry 64 Suppl 9: 7-10. PMID 12934968. 
[45] Hardy J, Allsop D (October 1991). "Amyloid deposition as the central event in the aetiology of Alzheimer's disease". Trends Pharmacol. 

Sci. 12 (10): 383-88. doi:10.1016/0165-6147(91)90609-V. PMID 1763432. 
[46] Mudher A, Lovestone S (January 2002). "Alzheimer's disease-do tauists and baptists finally shake hands?". Trends Neurosci. 25 (1): 22—26. 

doi:10.1016/S0166-2236(00)02031-2. PMID 11801334. 

Alzheimer's disease 

[47] Nistor M, Don M, Parekh M, et al. (October 2007). "Alpha- and beta-secretase activity as a function of age and beta-amyloid in Down syndrome and normal brain". Neurobiol Aging 28 (10): 1493-1506. doi:10.1016/j.neurobiolaging.2006.06.023. PMID 16904243.
[48] Lott IT, Head E (March 2005). "Alzheimer disease and Down syndrome: factors in pathogenesis". Neurobiol Aging 26 (3): 383-89. doi:10.1016/j.neurobiolaging.2004.08.005. PMID 15639317.
[49] Polvikoski T, Sulkava R, Haltia M, et al. (November 1995). "Apolipoprotein E, dementia, and cortical deposition of beta-amyloid protein". N Engl J Med 333 (19): 1242-47. doi:10.1056/NEJM199511093331902. PMID 7566000.
[50] Transgenic mice:

• Games D, Adams D, Alessandrini R, et al. (February 1995). "Alzheimer-type neuropathology in transgenic mice overexpressing V717F beta-amyloid precursor protein". Nature 373 (6514): 523-27. doi:10.1038/373523a0. PMID 7845465.
• Masliah E, Sisk A, Mallory M, Mucke L, Schenk D, Games D (September 1996). "Comparison of neurodegenerative pathology in transgenic mice overexpressing V717F beta-amyloid precursor protein and Alzheimer's disease". J Neurosci 16 (18): 5795-811. PMID 8795633.
• Hsiao K, Chapman P, Nilsen S, et al. (October 1996). "Correlative memory deficits, Abeta elevation, and amyloid plaques in transgenic mice". Science 274 (5284): 99-102. doi:10.1126/science.274.5284.99. PMID 8810256.
• Lalonde R, Dumont M, Staufenbiel M, Sturchler-Pierrat C, Strazielle C (2002). "Spatial learning, exploration, anxiety, and motor coordination in female APP23 transgenic mice with the Swedish mutation". Brain Research 956 (1): 36-44. doi:10.1016/S0006-8993(02)03476-5. PMID 12426044.

[51] Holmes C, Boche D, Wilkinson D, et al. (July 2008). "Long-term effects of Abeta42 immunisation in Alzheimer's disease: follow-up of a randomised, placebo-controlled phase I trial". Lancet 372 (9634): 216-23. doi:10.1016/S0140-6736(08)61075-2. PMID 18640458.
[52] Lacor PN, Buniel MC, Furlow PW, Clemente AS, Velasco PT, Wood M, Viola KL, Klein WL (January 2007). "Aβ Oligomer-Induced Aberrations in Synapse Composition, Shape, and Density Provide a Molecular Basis for Loss of Connectivity in Alzheimer's Disease". Journal of Neuroscience 27 (4): 796-807. doi:10.1523/JNEUROSCI.3501-06.2007. PMID 17251419.
[53] Lauren J, Gimbel D, et al. (February 2009). "Cellular prion protein mediates impairment of synaptic plasticity by amyloid-beta oligomers". Nature 457 (7233): 1128-32. doi:10.1038/nature07761. PMID 19242475. PMC 2748841.
[54] Nikolaev A, McLaughlin T, O'Leary D, Tessier-Lavigne M (19 February 2009). "N-APP binds DR6 to cause axon pruning and neuron death via distinct caspases". Nature 457 (7232): 981-989. doi:10.1038/nature07767. ISSN 0028-0836. PMID 19225519. PMC 2677572.
[55] Schmitz C, Rutten BP, Pielen A, et al. (April 2004). "Hippocampal neuron loss exceeds amyloid plaque load in a transgenic mouse model of Alzheimer's disease". Am J Pathol 164 (4): 1495-1502. PMID 15039236. PMC 1615337.
[56] Goedert M, Spillantini MG, Crowther RA (July 1991). "Tau proteins and neurofibrillary degeneration". Brain Pathol 1 (4): 279-86. doi:10.1111/j.1750-3639.1991.tb00671.x. PMID 1669718.
[57] Iqbal K, Alonso Adel C, Chen S, et al. (January 2005). "Tau pathology in Alzheimer disease and other tauopathies". Biochim Biophys Acta 1739 (2-3): 198-210. doi:10.1016/j.bbadis.2004.09.008. PMID 15615638.
[58] Chun W, Johnson GV (2007). "The role of tau phosphorylation and cleavage in neuronal cell death". Front Biosci 12: 733-56. doi:10.2741/2097. PMID 17127334.
[59] Itzhaki RF, Wozniak MA (May 2008). "Herpes simplex virus type 1 in Alzheimer's disease: the enemy within" (http://iospress.metapress.com/openurl.asp?genre=article&issn=1387-2877&volume=13&issue=4&spage=393). J Alzheimers Dis 13 (4): 393-405. ISSN 1387-2877. PMID 18487848.
[60] Bartzokis G (2009). "Alzheimer's disease as homeostatic responses to age-related myelin breakdown". Neurobiology of Aging.
[61] Bartzokis G, Lu PH, Mintz J (2004). "Quantifying age-related myelin breakdown with MRI: novel therapeutic targets for preventing cognitive decline and Alzheimer's disease". Journal of Alzheimer's Disease 6 (6 Suppl): S53-9. PMID 15665415.
[62] Bartzokis G, Lu P, Mintz J (2007). "Human brain myelination and amyloid beta deposition in Alzheimer's disease". Alzheimer's and Dementia 3: 122. doi:10.1016/j.jalz.2007.01.019.
[63] Su B, Wang X, Nunomura A, Moreira PI, Lee HG, Perry G, Smith MA, Zhu X (2008). "Oxidative stress signaling in Alzheimer's disease". Current Alzheimer Research 5 (6): 525-32. doi:10.2174/156720508786898451. PMID 19075578. PMC 2780015.
[64] Heneka MT, Nadrigny F, Regen T, Martinez-Hernandez A, Dumitrescu-Ozimek L, Terwel D, Jardanhazi-Kurutz D, Walter J, Kirchhoff F, Hanisch UK, Kummer MP (2010). "Locus ceruleus controls Alzheimer's disease pathology by modulating microglial functions through norepinephrine". Proc Natl Acad Sci USA 107: 6058-6063. doi:10.1073/pnas.0909586107. PMID 20231476.
[65] Moan R (July 20, 2009). "MRI software accurately IDs preclinical Alzheimer's disease". Diagnostic Imaging.
[66] Bouras C, Hof PR, Giannakopoulos P, Michel JP, Morrison JH (1994). "Regional distribution of neurofibrillary tangles and senile plaques in the cerebral cortex of elderly patients: a quantitative evaluation of a one-year autopsy population from a geriatric hospital". Cereb Cortex 4 (2): 138-50. doi:10.1093/cercor/4.2.138. PMID 8038565.


[67] Kotzbauer PT, Trojanowski JQ, Lee VM (October 2001). "Lewy body pathology in Alzheimer's disease". J Mol Neurosci 17 (2): 225-32. doi:10.1385/JMN:17:2:225. PMID 11816795.
[68] Hashimoto M, Rockenstein E, Crews L, Masliah E (2003). "Role of protein aggregation in mitochondrial dysfunction and neurodegeneration in Alzheimer's and Parkinson's diseases". Neuromolecular Med 4 (1-2): 21-36. doi:10.1385/NMM:4:1-2:21. PMID 14528050.
[69] Priller C, Bauer T, Mitteregger G, Krebs B, Kretzschmar HA, Herms J (July 2006). "Synapse formation and function is modulated by the amyloid precursor protein". J Neurosci 26 (27): 7212-21. doi:10.1523/JNEUROSCI.1450-06.2006. PMID 16822978.
[70] Turner PR, O'Connor K, Tate WP, Abraham WC (May 2003). "Roles of amyloid precursor protein and its fragments in regulating neural activity, plasticity and memory". Prog Neurobiol 70 (1): 1-32. doi:10.1016/S0301-0082(03)00089-3. PMID 12927332.
[71] Hooper NM (April 2005). "Roles of proteolysis and lipid rafts in the processing of the amyloid precursor protein and prion protein". Biochem Soc Trans 33 (Pt 2): 335-8. doi:10.1042/BST0330335. PMID 15787600.
[72] Ohnishi S, Takano K (March 2004). "Amyloid fibrils from the viewpoint of protein folding". Cell Mol Life Sci 61 (5): 511-24. doi:10.1007/s00018-003-3264-8. PMID 15004691.
[73] Hernandez F, Avila J (September 2007). "Tauopathies". Cell Mol Life Sci 64 (17): 2219-33. doi:10.1007/s00018-007-7220-x. PMID 17604998.
[74] Van Broeck B, Van Broeckhoven C, Kumar-Singh S (2007). "Current insights into molecular mechanisms of Alzheimer disease and their implications for therapeutic approaches". Neurodegener Dis 4 (5): 349-65. doi:10.1159/000105156. PMID 17622778.
[75] Yankner BA, Duffy LK, Kirschner DA (October 1990). "Neurotrophic and neurotoxic effects of amyloid beta protein: reversal by tachykinin neuropeptides". Science 250 (4978): 279-82. doi:10.1126/science.2218531. PMID 2218531.
[76] Chen X, Yan SD (December 2006). "Mitochondrial Abeta: a potential cause of metabolic dysfunction in Alzheimer's disease". IUBMB Life 58 (12): 686-94. doi:10.1080/15216540601047767. PMID 17424907.
[77] Greig NH, Mattson MP, Perry T, et al. (December 2004). "New therapeutic strategies and drug candidates for neurodegenerative diseases: p53 and TNF-alpha inhibitors, and GLP-1 receptor agonists". Ann N Y Acad Sci 1035: 290-315. doi:10.1196/annals.1332.018. PMID 15681814.
[78] Tapia-Arancibia L, Aliaga E, Silhol M, Arancibia S (November 2008). "New insights into brain BDNF function in normal aging and Alzheimer disease". Brain Research Reviews 59 (1): 201-20. doi:10.1016/j.brainresrev.2008.07.007. PMID 18708092.
[79] Schindowski K, Belarbi K, Buee L (February 2008). "Neurotrophic factors in Alzheimer's disease: role of axonal transport". Genes, Brain and Behavior 7 (Suppl 1): 43-56. doi:10.1111/j.1601-183X.2007.00378.x. PMID 18184369. PMC 2228393.
[80] Blennow K, de Leon MJ, Zetterberg H (July 2006). "Alzheimer's disease". Lancet 368 (9533): 387-403. doi:10.1016/S0140-6736(06)69113-7. PMID 16876668.
[81] Waring SC, Rosenberg RN (March 2008). "Genome-wide association studies in Alzheimer disease". Arch Neurol 65 (3): 329-34. doi:10.1001/archneur.65.3.329. PMID 18332245.
[82] Selkoe DJ (June 1999). "Translating cell biology into therapeutic advances in Alzheimer's disease". Nature 399 (6738 Suppl): A23-31. doi:10.1038/19866. PMID 10392577.
[83] Shioi J, Georgakopoulos A, Mehta P, et al. (May 2007). "FAD mutants unable to increase neurotoxic Aβ42 suggest that mutation effects on neurodegeneration may be independent of effects on Abeta". J Neurochem 101 (3): 674-81. doi:10.1111/j.1471-4159.2006.04391.x. PMID 17254019.
[84] Strittmatter WJ, Saunders AM, Schmechel D, et al. (March 1993). "Apolipoprotein E: high-avidity binding to beta-amyloid and increased frequency of type 4 allele in late-onset familial Alzheimer disease". Proc Natl Acad Sci USA 90 (5): 1977-81. doi:10.1073/pnas.90.5.1977. PMID 8446617. PMC 46003.
[85] Mahley RW, Weisgraber KH, Huang Y (April 2006). "Apolipoprotein E4: a causative factor and therapeutic target in neuropathology, including Alzheimer's disease". Proc Natl Acad Sci USA 103 (15): 5644-51. doi:10.1073/pnas.0600549103. PMID 16567625. PMC 1414631.
[86] Mendez MF (2006). "The accurate diagnosis of early-onset dementia". International Journal of Psychiatry in Medicine 36 (4): 401-412. doi:10.2190/Q6J4-R143-P630-KW41. PMID 17407994.
[87] Klafki HW, Staufenbiel M, Kornhuber J, Wiltfang J (November 2006). "Therapeutic approaches to Alzheimer's disease". Brain 129 (Pt 11): 2840-55. doi:10.1093/brain/awl280. PMID 17018549.
[88] "Dementia: Quick reference guide" (PDF). London: (UK) National Institute for Health and Clinical Excellence. November 2006. Retrieved 2008-02-22.
[89] Schroeter ML, Stein T, Maslowski N, Neumann J (2009). "Neural correlates of Alzheimer's disease and mild cognitive impairment - a meta-analysis including 1351 patients". NeuroImage 47 (4): 1196-1206. doi:10.1016/j.neuroimage.2009.05.037. PMID 19463961. PMC 2730171.
[90] McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM (July 1984). "Clinical diagnosis of Alzheimer's disease: report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's Disease". Neurology 34 (7): 939-44. PMID 6610841.
[91] Dubois B, Feldman HH, Jacova C, et al. (August 2007). "Research criteria for the diagnosis of Alzheimer's disease: revising the NINCDS-ADRDA criteria". Lancet Neurol 6 (8): 734-46. doi:10.1016/S1474-4422(07)70178-3. PMID 17616482.
[92] Blacker D, Albert MS, Bassett SS, Go RC, Harrell LE, Folstein MF (December 1994). "Reliability and validity of NINCDS-ADRDA criteria for Alzheimer's disease. The National Institute of Mental Health Genetics Initiative". Arch Neurol 51 (12): 1198-204. PMID 7986174.
[93] American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders: DSM-IV-TR (4th ed., text revision). Washington, DC: American Psychiatric Association. ISBN 0890420254.
[94] Ito N (May 1996). "[Clinical aspects of dementia]" (in Japanese). Hokkaido Igaku Zasshi 71 (3): 315-20. PMID 8752526.
[95] Tombaugh TN, McIntyre NJ (September 1992). "The mini-mental state examination: a comprehensive review". J Am Geriatr Soc 40 (9): 922-35. PMID 1512391.
[96] Pasquier F (January 1999). "Early diagnosis of dementia: neuropsychology". J Neurol 246 (1): 6-15. doi:10.1007/s004150050299. PMID 9987708.
[97] Antoine C, Antoine P, Guermonprez P, Frigard B (2004). "[Awareness of deficits and anosognosia in Alzheimer's disease]" (in French). Encephale 30 (6): 570-7. doi:10.1016/S0013-7006(04)95472-3. PMID 15738860.
[98] Cruz VT, Pais J, Teixeira A, Nunes B (2004). "[The initial symptoms of Alzheimer disease: caregiver perception]" (in Portuguese). Acta Med Port 17 (6): 435-44. PMID 16197855.
[99] Marksteiner J, Hinterhuber H, Humpel C (June 2007). "Cerebrospinal fluid biomarkers for diagnosis of Alzheimer's disease: beta-amyloid(1-42), tau, phospho-tau-181 and total protein". Drugs Today 43 (6): 423-31. doi:10.1358/dot.2007.43.6.1067341. PMID 17612711.
[100] De Meyer G, Shapiro F, Vanderstichele H, Vanmechelen E, Engelborghs S, De Deyn PP, Coart E, Hansson O, Minthon L, Zetterberg H, Blennow K, Shaw L, Trojanowski JQ (August 2010). "Diagnosis-Independent Alzheimer Disease Biomarker Signature in Cognitively Normal Elderly People". Arch Neurol 67 (8): 949-56. doi:10.1001/archneurol.2010.179. PMID 20697045.
[101] Kolata G (August 9, 2010). "Spinal-Fluid Test Is Found to Predict Alzheimer's". The New York Times. Retrieved August 10, 2010.
[102] Roan S (August 9, 2010). "Tapping into an accurate diagnosis of Alzheimer's disease". Los Angeles Times. Retrieved August 10, 2010.
[103] Clarfield AM (October 2003). "The decreasing prevalence of reversible dementias: an updated meta-analysis". Arch Intern Med 163 (18): 2219-29. doi:10.1001/archinte.163.18.2219. PMID 14557220.
[104] Sun X, Steffens DC, Au R, Folstein M, Summergrad P, Yee J, Rosenberg I, Mwamburi DM, et al. (2008). "Amyloid-Associated Depression: A Prodromal Depression of Alzheimer Disease?" (http://archpsyc.ama-assn.org/cgi/content/short/65/5/542). Arch Gen Psychiatry 65 (5): 542-550. doi:10.1001/archpsyc.65.5.542. PMID 18458206.
[105] Geldmacher DS, Whitehouse PJ (May 1997). "Differential diagnosis of Alzheimer's disease". Neurology 48 (5 Suppl 6): S2-9. PMID 9153154.
[106] Potter GG, Steffens DC (May 2007). "Contribution of depression to cognitive impairment and dementia in older adults". Neurologist 13 (3): 105-17. doi:10.1097/01.nrl.0000252947.15389.a9. PMID 17495754.
[107] Bonte FJ, Harris TS, Hynan LS, Bigio EH, White CL (July 2006). "Tc-99m HMPAO SPECT in the differential diagnosis of the dementias with histopathologic confirmation". Clin Nucl Med 31 (7): 376-8. doi:10.1097/01.rlu.0000222736.81365.63. PMID 16785801.
[108] Dougall NJ, Bruggink S, Ebmeier KP (2004). "Systematic review of the diagnostic accuracy of 99mTc-HMPAO-SPECT in dementia". Am J Geriatr Psychiatry 12 (6): 554-70. doi:10.1176/appi.ajgp.12.6.554. PMID 15545324.
[109] PiB PET:

• Kemppainen NM, Aalto S, Karrasch M, et al. (January 2008). "Cognitive reserve hypothesis: Pittsburgh Compound B and fluorodeoxyglucose positron emission tomography in relation to education in mild Alzheimer's disease". Ann Neurol 63 (1): 112-8. doi:10.1002/ana.21212. PMID 18023012.
• Ikonomovic MD, Klunk WE, Abrahamson EE, et al. (June 2008). "Post-mortem correlates of in vivo PiB-PET amyloid imaging in a typical case of Alzheimer's disease". Brain 131 (Pt 6): 1630-45. doi:10.1093/brain/awn016. PMID 18339640. PMC 2408940.
• Jack CR, Lowe VJ, Senjem ML, et al. (March 2008). "11C PiB and structural MRI provide complementary information in imaging of Alzheimer's disease and amnestic mild cognitive impairment". Brain 131 (Pt 3): 665-80. doi:10.1093/brain/awm336. PMID 18263627. PMC 2730157.

[110] Abella HA (June 16, 2009). "Report from SNM: PET imaging of brain chemistry bolsters characterization of dementias". Diagnostic Imaging.
[111] Carpenter AP Jr, Pontecorvo MJ, Hefti FF, Skovronsky DM (August 2009). "The use of the exploratory IND in the evaluation and development of 18F-PET radiopharmaceuticals for amyloid imaging in the brain: a review of one company's experience". Q J Nucl Med Mol Imaging 53 (4): 387-93. PMID 19834448.
[112] Leung K (April 8, 2010). "(E)-4-(2-(6-(2-(2-(2-(18F-fluoroethoxy)ethoxy)ethoxy)pyridin-3-yl)vinyl)-N-methyl benzenamine [[18F]AV-45]". Molecular Imaging and Contrast Agent Database. Retrieved 2010-06-24.
[113] Kolata G (June 23, 2010). "Promise Seen for Detection of Alzheimer's". The New York Times. Retrieved June 23, 2010.
[114] Wong DF, Rosenberg PB, Zhou Y, Kumar A, Raymont V, Ravert HT, Dannals RF, Nandi A, Brasic JR, Ye W, Hilton J, Lyketsos C, Kung HF, Joshi AD, Skovronsky DM, Pontecorvo MJ (June 2010). "In vivo imaging of amyloid deposition in Alzheimer disease using the radioligand 18F-AV-45 (florbetapir F 18)". J Nucl Med 51 (6): 913-20. doi:10.2967/jnumed.109.069088. PMID 20501908.


[115] Rabinovici GD, Jagust WJ (2009). "Amyloid imaging in aging and dementia: testing the amyloid hypothesis in vivo". Behav Neurol 21 (1): 117-28. doi:10.3233/BEN-2009-0232 (inactive 2010-08-25). PMID 19847050. PMC 2804478.
[116] O'Brien JT (December 2007). "Role of imaging techniques in the diagnosis of dementia". Br J Radiol 80 (Spec No 2): S71-7. doi:10.1259/bjr/33117326. PMID 18445747.
[117] Rupsingh R, Borrie M, Smith M, Wells JL, Bartha R (June 2009). "Reduced hippocampal glutamate in Alzheimer disease". Neurobiol Aging. doi:10.1016/j.neurobiolaging.2009.05.002. PMID 19501936.
[118] Prevention recommendations not supported:

• Kawas CH (2006). "Medications and diet: protective factors for AD?". Alzheimer Dis Assoc Disord 20 (3 Suppl 2): S89-96. PMID 16917203.
• Luchsinger JA, Mayeux R (2004). "Dietary factors and Alzheimer's disease". Lancet Neurol 3 (10): 579-87. doi:10.1016/S1474-4422(04)00878-6. PMID 15380154.
• Luchsinger JA, Noble JM, Scarmeas N (2007). "Diet and Alzheimer's disease". Curr Neurol Neurosci Rep 7 (5): 366-72. doi:10.1007/s11910-007-0057-8. PMID 17764625.
• National Institutes of Health (April 28, 2010). "Independent Panel Finds Insufficient Evidence to Support Preventive Measures for Alzheimer's Disease". Press release.
• Daviglus ML, et al. (April 26-28, 2010). "NIH State-of-the-Science Conference: Preventing Alzheimer's Disease and Cognitive Decline".

[119] Szekely CA, Breitner JC, Zandi PP (2007). "Prevention of Alzheimer's disease". Int Rev Psychiatry 19 (6): 693-706. doi:10.1080/09540260701797944. PMID 18092245.
[120] Patterson C, Feightner JW, Garcia A, Hsiung GY, MacKnight C, Sadovnick AD (February 2008). "Diagnosis and treatment of dementia: 1. Risk assessment and primary prevention of Alzheimer disease". CMAJ 178 (5): 548-56. doi:10.1503/cmaj.070796. PMID 18299540. PMC 2244657.
[121] Rosendorff C, Beeri MS, Silverman JM (2007). "Cardiovascular risk factors for Alzheimer's disease". Am J Geriatr Cardiol 16 (3): 143-9. doi:10.1111/j.1076-7460.2007.06696.x. PMID 17483665.
[122] Reiss AB, Wirkowski E (2007). "Role of HMG-CoA reductase inhibitors in neurological disorders: progress to date". Drugs 67 (15): 2111-20. doi:10.2165/00003495-200767150-00001. PMID 17927279.
[123] Kuller LH (August 2007). "Statins and dementia". Curr Atheroscler Rep 9 (2): 154-61. doi:10.1007/s11883-007-0012-9. PMID 17877925.
[124] Solfrizzi V, Capurso C, D'Introno A, et al. (January 2008). "Lifestyle-related factors in predementia and dementia syndromes". Expert Rev Neurother 8 (1): 133-58. doi:10.1586/14737175.8.1.133. PMID 18088206.
[125] Panza F, Capurso C, D'Introno A, Colacicco AM, Frisardi V, Lorusso M, Santamato A, Seripa D, Pilotto A, Scafato E, Vendemiale G, Capurso A, Solfrizzi V (May 2009). "Alcohol drinking, cognitive functions in older age, predementia, and dementia syndromes". J Alzheimers Dis 17 (1): 7-31. doi:10.3233/JAD-2009-1009 (inactive 2010-08-25). PMID 19494429.
[126] Boothby LA, Doering PL (December 2005). "Vitamin C and vitamin E for Alzheimer's disease". Ann Pharmacother 39 (12): 2073-80. doi:10.1345/aph.1E495. PMID 16227450.
[127] Isaac MG, Quinn R, Tabet N (2008). "Vitamin E for Alzheimer's disease and mild cognitive impairment". Cochrane Database Syst Rev (3): CD002854. doi:10.1002/14651858.CD002854.pub2. PMID 18646084.
[128] Malouf R, Grimley Evans J (2008). "Folic acid with or without vitamin B12 for the prevention and treatment of healthy elderly and demented people". Cochrane Database Syst Rev (4): CD004514. doi:10.1002/14651858.CD004514.pub2. PMID 18843658.
[129] Wald DS, Kasturiratne A, Simmonds M (June 2010). "Effect of folic acid, with or without other B vitamins, on cognitive decline: meta-analysis of randomized trials". The American Journal of Medicine 123 (6): 522-527.e2. doi:10.1016/j.amjmed.2010.01.017. PMID 20569758.
[130] Szekely CA, Town T, Zandi PP (2007). "NSAIDs for the chemoprevention of Alzheimer's disease". Subcell Biochem 42: 229-48. doi:10.1007/1-4020-5688-5_11. PMID 17612054.
[131] Ringman JM, Frautschy SA, Cole GM, Masterman DL, Cummings JL (April 2005). "A potential role of the curry spice curcumin in Alzheimer's disease". Curr Alzheimer Res 2 (2): 131-6. doi:10.2174/1567205053585882. ISSN 1567-2050. PMID 15974909. PMC 1702408.
[132] Aggarwal BB, Harikumar KB (January 2009). "Potential therapeutic effects of curcumin, the anti-inflammatory agent, against neurodegenerative, cardiovascular, pulmonary, metabolic, autoimmune and neoplastic diseases". Int J Biochem Cell Biol 41 (1): 40-59. doi:10.1016/j.biocel.2008.06.010. PMID 18662800. PMC 2637808.
[133] Farquhar C, Marjoribanks J, Lethaby A, Suckling JA, Lamberts Q (15 April 2009). "Long term hormone therapy for perimenopausal and postmenopausal women". Cochrane Database Syst Rev (2): CD004143. doi:10.1002/14651858.CD004143.pub3. PMID 19370593.
[134] Barrett-Connor E, Laughlin GA (May 2009). "Endogenous and exogenous estrogen, cognitive function, and dementia in postmenopausal women: evidence from epidemiologic studies and clinical trials". Semin Reprod Med 27 (3): 275-82. doi:10.1055/s-0029-1216280. PMID 19401958. PMC 2701737.
[135] Birks J, Grimley Evans J (2009). "Ginkgo biloba for cognitive impairment and dementia". Cochrane Database Syst Rev (1): CD003120. doi:10.1002/14651858.CD003120.pub3. PMID 19160216. Retrieved 2009-08-13.


[136] DeKosky ST, Williamson JD, Fitzpatrick AL, et al. (2008). "Ginkgo biloba for Prevention of Dementia". Journal of the American Medical Association 300 (19): 2253-2262. doi:10.1001/jama.2008.683. PMID 19017911. PMC 2823569. Retrieved 2008-11-18.
[137] Eskelinen MH, Ngandu T, Tuomilehto J, Soininen H, Kivipelto M (January 2009). "Midlife coffee and tea drinking and the risk of late-life dementia: a population-based CAIDE study". J Alzheimers Dis 16 (1): 85-91. doi:10.3233/JAD-2009-0920 (inactive 2010-08-25). PMID 19158424.
[138] Stern Y (July 2006). "Cognitive reserve and Alzheimer disease". Alzheimer Disease and Associated Disorders 20 (2): 112-117. doi:10.1097/01.wad.0000213815.20177.19. ISSN 0893-0341. PMID 16917199.
[139] Paradise M, Cooper C, Livingston G (February 2009). "Systematic review of the effect of education on survival in Alzheimer's disease". Int Psychogeriatr 21 (1): 25-32. doi:10.1017/S1041610208008053. PMID 19026089.
[140] Shcherbatykh I, Carpenter DO (May 2007). "The role of metals in the etiology of Alzheimer's disease". J Alzheimers Dis 11 (2): 191-205. PMID 17522444.
[141] Rondeau V, Commenges D, Jacqmin-Gadda H, Dartigues JF (July 2000). "Relation between aluminum concentrations in drinking water and Alzheimer's disease: an 8-year follow-up study". Am J Epidemiol 152 (1): 59-66. doi:10.1093/aje/152.1.59. PMID 10901330. PMC 2215380.
[142] Kukull WA, Larson EB, Bowen JD, et al. (June 1995). "Solvent exposure as a risk factor for Alzheimer's disease: a case-control study". Am J Epidemiol 141 (11): 1059-71; discussion 1072-9. PMID 7771442.
[143] Santibanez M, Bolumar F, Garcia AM (2007). "Occupational risk factors in Alzheimer's disease: a review assessing the quality of published epidemiological studies". Occupational and Environmental Medicine 64 (11): 723-732. doi:10.1136/oem.2006.028209. PMID 17525096.
[144] Seidler A, Geller P, Nienhaus A, et al. (February 2007). "Occupational exposure to low frequency magnetic fields and dementia: a case-control study". Occup Environ Med 64 (2): 108-14. doi:10.1136/oem.2005.024190. PMID 17043077. PMC 2078432.
[145] Rondeau V (2002). "A review of epidemiologic studies on aluminum and silica in relation to Alzheimer's disease and associated disorders". Rev Environ Health 17 (2): 107-21. PMID 12222737.
[146] Martyn CN, Coggon DN, Inskip H, Lacey RF, Young WF (May 1997). "Aluminum concentrations in drinking water and risk of Alzheimer's disease". Epidemiology 8 (3): 281-6. doi:10.1097/00001648-199705000-00009. PMID 9115023.
[147] Graves AB, Rosner D, Echeverria D, Mortimer JA, Larson EB (September 1998). "Occupational exposures to solvents and aluminium and estimated risk of Alzheimer's disease". Occup Environ Med 55 (9): 627-33. doi:10.1136/oem.55.9.627. PMID 9861186. PMC 1757634.
[148] Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) (January 2009). Health Effects of Exposure to EMF. Brussels: Directorate General for Health & Consumers; European Commission. pp. 4-5. Retrieved 2010-04-27.
[149] Cataldo JK, Prochaska JJ, Glantz SA (2010). "Cigarette smoking is a risk factor for Alzheimer's disease: an analysis controlling for tobacco industry affiliation". J Alzheimers Dis 19 (2): 465-80. doi:10.3233/JAD-2010-1240 (inactive 2010-08-25). PMID 20110594.
[150] Eikelenboom P, Van Exel E, Hoozemans JJ, Veerhuis R, Rozemuller AJ, Van Gool WA (2010). "Neuroinflammation - an early event in both the history and pathogenesis of Alzheimer's disease". Neuro-degenerative Diseases 7 (1-3): 38-41. doi:10.1159/000283480. PMID 20160456.
[151] Geula C, Mesulam MM (1995). "Cholinesterases and the pathology of Alzheimer disease". Alzheimer Dis Assoc Disord 9 Suppl 2: 23-28. PMID 8534419.
[152] Stahl SM (2000). "The new cholinesterase inhibitors for Alzheimer's disease, Part 2: illustrating their mechanisms of action". J Clin Psychiatry 61 (11): 813-4. doi:10.4088/JCP.v61n1101. PMID 11105732.
[153] "Donepezil". Medline Plus. US National Library of Medicine. 2007-01-08. Retrieved 2010-02-03.
[154] "Galantamine". Medline Plus. US National Library of Medicine. 2007-01-08. Retrieved 2010-02-03.
[155] "Rivastigmine". Medline Plus. US National Library of Medicine. 2007-01-08. Retrieved 2010-02-03.
[156] "Rivastigmine Transdermal". Medline Plus. US National Library of Medicine. 2007-01-08. Retrieved 2010-02-03.
[157] Birks J (2006). "Cholinesterase inhibitors for Alzheimer's disease". Cochrane Database Syst Rev (1): CD005593. doi:10.1002/14651858.CD005593. PMID 16437532.
[158] Birks J, Grimley Evans J, Iakovidou V, Tsolaki M, Holt FE (2009-04-15). "Rivastigmine for Alzheimer's disease". Cochrane Database Syst Rev (2): CD001191. doi:10.1002/14651858.CD001191.pub2. PMID 19370562.
[159] Birks J, Harvey RJ (2006-01-25). "Donepezil for dementia due to Alzheimer's disease". Cochrane Database Syst Rev (1): CD001190. doi:10.1002/14651858.CD001190.pub2. PMID 16437430.
[160] Raschetti R, Albanese E, Vanacore N, Maggini M (2007). "Cholinesterase inhibitors in mild cognitive impairment: a systematic review of randomised trials". PLoS Med 4 (11): e338. doi:10.1371/journal.pmed.0040338. PMID 18044984. PMC 2082649.


[161] Acetylcholinesterase inhibitors prescribing information: 

• "Aricept Prescribing information" ( (PDF). Eisai and Pfizer. 
. Retrieved 2008-08-18. (primary source) 

• "Razadyne ER U.S. Full Prescribing Information" (http://web.archive.Org/web/20080528195504/ 
pages/pdf/razadyne_er.pdf) (PDF). Ortho-McNeil Neurologies. . Retrieved 2008-02-19. (primary source) 

• "Exelon ER U.S. Prescribing Information" (http://web.archive.Org/web/20070728014715/ 
product/pi/pdf/exelonpatch.pdf) (PDF). Novartis Pharmaceuticals. Archived from the original ( 
product/pi/pdf/exelonpatch.pdf) on 2007-07-28. . Retrieved 2008-02-19. (primary source) 

• "Exelon U.S. Prescribing Information" (http://web.archive.Org/web/20070710074347/ 
020823s016, 021025s0081bl.pdf) (PDF). Novartis Pharmaceuticals. June 2006. Archived from the original (http://www.accessdata.fda. 
gov/drugsatfda_docs/label/2006/020823s016,021025s0081bl.pdf) on 2007-07-10. . Retrieved 2009-07-30. (primary source) 

• "Exelon Warning Letter" ( 
EnforcementActivitiesbyFDA/WarningLettersandNoticeofViolationLetterstoPharmaceuticalCompanies/ucm0541 80.pdf) (PDF). US 
Food and Drug Administration. August 2007. . Retrieved 2009-07-30. 

[162] Lipton SA (2006). "Paradigm shift in neuroprotection by NMDA receptor blockade: memantine and beyond". Nat Rev Drug Discov 5 (2): 

160-170. doi:10.1038/nrd1958. PMID 16424917.
[163] "Memantine" ( US National Library of Medicine (Medline). 

2004-01-04. . Retrieved 2010-02-03. 
[164] Areosa Sastre A, McShane R, Sherriff F (2004). "Memantine for dementia". Cochrane Database Syst Rev (4): CD003154. 

doi:10.1002/14651858.CD003154.pub2. PMID 15495043. 
[165] "Namenda Prescribing Information" ( (PDF). Forest Pharmaceuticals. . Retrieved 2008-02-19. 

(primary source) 
[166] Raina P, Santaguida P, Ismaila A, et al. (2008). "Effectiveness of cholinesterase inhibitors and memantine for treating dementia: evidence 

review for a clinical practice guideline". Annals of Internal Medicine 148 (5): 379-397. PMID 18316756.
[167] Antipsychotics use: 

• Ballard C, Waite J (2006). "The effectiveness of atypical antipsychotics for the treatment of aggression and psychosis in Alzheimer's 
disease". Cochrane Database Syst Rev (1): CD003476. doi:10.1002/14651858.CD003476.pub2. PMID 16437455. 

• Ballard C, Lana MM, Theodoulou M, et al. (2008). "A randomised, blinded, placebo-controlled trial in dementia patients continuing or 
stopping neuroleptics (The DART- AD trial)" (http://www.pubmedcentral.nih. gov/ articlerender.fcgi?tool=pmcentrez& 
artid=2276521). PLoS Med 5 (4): e76. doi:10.1371/journal.pmed.0050076. PMID 18384230. PMC 2276521. 

• Sink KM, Holden KF, Yaffe K (2005). "Pharmacological treatment of neuropsychiatric symptoms of dementia: a review of the evidence". 
J Am Med Assoc 293 (5): 596-608. doi:10.1001/jama.293.5.596. PMID 15687315. 

[168] Ballard C, Hanney ML, Theodoulou M, Douglas S, McShane R, Kossakowski K, Gill R, Juszczak E, Yu L-M, Jacoby R (9 January 2009). 

"The dementia antipsychotic withdrawal trial (DART-AD): long-term follow-up of a randomised placebo-controlled trial". Lancet Neurology 

8 (2): 151. doi:10.1016/S1474-4422(08)70295-3. PMID 19138567. Lay summary ( 
[169] Eubanks LM, Rogers CJ, Beuscher AE, et al. (November 2006). "A molecular link between the active component of marijuana and 

Alzheimer's disease pathology" ( (Free full text). 

Molecular Pharmaceutics 3 (6): 773-7. doi:10.1021/mp060066m. ISSN 1543-8384. PMID 17140265. PMC 2562334. 
[170] Campbell VA, Gowran A (November 2007). "Alzheimer's disease; taking the edge off with cannabinoids?" (http://www.pubmedcentral. ?tool=pmcentrez&artid=2190031). Br J Pharmacol 152 (5): 655-62. doi:10.1038/sj.bjp.0707446.

PMID 17828287. PMC 2190031. 
[171] "Practice Guideline for the Treatment of Patients with Alzheimer's disease and Other Dementias" ( 

pracGuide/loadGuidelinePdf.aspx?file=AlzPG101007) (PDF). American Psychiatric Association. October 2007. 

doi:10.1176/appi.books.9780890423967.152139. . Retrieved 2007-12-28. 
[172] Bottino CM, Carvalho IA, Alvarez AM, et al. (2005). "Cognitive rehabilitation combined with drug treatment in Alzheimer's disease 

patients: a pilot study". ClinRehabil 19 (8): 861-869. doi:10.1191/0269215505cr911oa. PMID 16323385. 
[173] Doody RS, Stevens JC, Beck C, et al. (2001). "Practice parameter: management of dementia (an evidence-based review). Report of the 

Quality Standards Subcommittee of the American Academy of Neurology". Neurology 56 (9): 1154—1166. PMID 11342679. 
[174] Hermans DG, Htay UH, McShane R (2007). "Non-pharmacological interventions for wandering of people with dementia in the domestic 

setting". Cochrane Database Syst Rev (1): CD005994. doi:10.1002/14651858.CD005994.pub2. PMID 17253573. 
[175] Robinson L, Hutchings D, Dickinson HO, et al. (2007). "Effectiveness and acceptability of non-pharmacological interventions to reduce 

wandering in dementia: a systematic review". Int J Geriatr Psychiatry 22 (1): 9-22. doi:10.1002/gps.1643. PMID 17096455.
[176] Woods B, Spector A, Jones C, Orrell M, Davies S (2005). "Reminiscence therapy for dementia". Cochrane Database Syst Rev (2): 

CD001120. doi:10.1002/14651858.CD001120.pub2. PMID 15846613. 
[177] Zetteler J (November 2008). "Effectiveness of simulated presence therapy for individuals with dementia: a systematic review and 

meta-analysis". Aging Ment Health 12 (6): 779-85. doi:10.1080/13607860802380631. PMID 19023729. 
[178] Neal M, Briggs M (2003). "Validation therapy for dementia". Cochrane Database Syst Rev (3): CD001394. 

doi:10.1002/14651858.CD001394. PMID 12917907. 


[179] Chung JC, Lai CK, Chung PM, French HP (2002). "Snoezelen for dementia". Cochrane Database Syst Rev (4): CD003152.

doi:10.1002/14651858.CD003152. PMID 12519587. 
[180] Spector A, Orrell M, Davies S, Woods B (2000). "Withdrawn: Reality orientation for dementia". Cochrane Database Syst Rev (3): 

CD001119. doi:10.1002/14651858.CD001119.pub2. PMID 17636652. 
[181] Spector A, Thorgrimsen L, Woods B, et al. (2003). "Efficacy of an evidence-based cognitive stimulation therapy programme for people 

with dementia: randomised controlled trial". Br J Psychiatry 183: 248-254. doi:10.1192/bjp.l83.3.248. PMID 12948999. 
[182] Gitlin LN, Corcoran M, Winter L, Boyce A, Hauck WW (1 February 2001). "A randomized, controlled trial of a home environmental 

intervention: effect on efficacy and upset in caregivers and on daily function of persons with dementia" (http://gerontologist. ?view=long&pmid=11220813). Gerontologist 41 (1): 4-14. PMID 11220813. . Retrieved

[183] Gitlin LN, Hauck WW, Dennis MP, Winter L (March 2005). "Maintenance of effects of the home environmental skill-building program for 

family caregivers and individuals with Alzheimer's disease and related disorders". J. Gerontol. A Biol. Sci. Med. Sci. 60 (3): 368—74. 

PMID 15860476. 
[184] "Treating behavioral and psychiatric symptoms" (http://web.archive.Org/web/20060925112503/ 

Treating/agitation. asp). Alzheimer's Association. 2006. . Retrieved 2006-09-25. 
[185] Dunne TE, Neargarder SA, Cipolloni PB, Cronin-Golomb A (2004). "Visual contrast enhances food and liquid intake in advanced 

Alzheimer's disease". Clinical Nutrition 23 (4): 533-538. doi:10.1016/j.clnu.2003.09.015. PMID 15297089. 
[186] Dudek, Susan G. (2007). Nutrition essentials for nursing practice (http://books. google. com/?id=01zo6yf0IUEC&pg=PA360& 

dq=alzheimer's+chew). Hagerstown, Maryland: Lippincott Williams & Wilkins. p. 360. ISBN 0-7817-6651-6. . Retrieved 2008-08-19. 
[187] Dennehy C (2006). "Analysis of patients' rights: dementia and PEG insertion". Br J Nurs 15 (1): 18-20. PMID 16415742.
[188] Chernoff R (April 2006). "Tube feeding patients with dementia". Nutr Clin Pract 21 (2): 142-6. doi:10.1177/0115426506021002142.

PMID 16556924. 
[189] Gambassi G, Landi F, Lapane KL, Sgadari A, Mor V, Bernabei R (July 1999). "Predictors of mortality in patients with Alzheimer's disease 

living in nursing homes" ( J. Neurol. Neurosurg. 

Psychiatr. 67 (1): 59-65. doi:10.1136/jnnp.67.1.59. PMID 10369823. PMC 1736445. 
[190] Medical issues: 

• Head B (January 2003). "Palliative care for persons with dementia". Home Healthc Nurse 21 (1): 53—60; quiz 61. 
doi: 10.1097/00004045-200301000-00012. PMID 12544465. 

• Friedlander AH, Norman DC, Mahler ME, Norman KM, Yagiela JA (September 2006). "Alzheimer's disease: psychopathology, medical 
management and dental implications". J Am Dent Assoc 137 (9): 1240-51. PMID 16946428. 

• Belmin J; Expert Panel and Organisation Committee (2007). "Practical guidelines for the diagnosis and management of weight loss in 
Alzheimer's disease: a consensus from appropriateness ratings of a large expert panel". J Nutr Health Aging 11 (1): 33—7. 

PMID 17315078. 

• McCurry SM, Gibbons LE, Logsdon RG, Vitiello M, Teri L (October 2003). "Training caregivers to change the sleep hygiene practices of 
patients with dementia: the NITE-AD project". J Am Geriatr Soc 51 (10): 1455-60. doi:10.1046/j.1532-5415.2003.51466.x.

PMID 14511168. 

• Perls TT, Herget M (December 1995). "Higher respiratory infection rates on an Alzheimer's special care unit and successful intervention". 
J Am Geriatr Soc 43 (12): 1341-4. PMID 7490383.

[191] Shega JW, Levin A, Hougham GW, et al. (April 2003). "Palliative Excellence in Alzheimer Care Efforts (PEACE): a program 

description". J Palliat Med 6 (2): 315-20. doi:10.1089/109662103764978641. PMID 12854952. 
[192] Mitchell SL, Teno JM, Kiely DK, et al. (Oct 2009). "The clinical course of advanced dementia" ( 

articlerender.fcgi?tool=pmcentrez&artid=2778850). N Engl J Med 361 (16): 1529-38. doi:10.1056/NEJMoa0902234. PMID 19828530. 

PMC 2778850. 
[193] Bowen JD, Maker AD, Sheppard L, et al. (August 1996). "Predictors of mortality in patients diagnosed with probable Alzheimer's 

disease". Neurology 47 (2): 433-9. PMID 8757016. 
[194] Dodge HH, Shen C, Pandav R, DeKosky ST, Ganguli M (February 2003). "Functional transitions and active life expectancy associated 

with Alzheimer disease". Arch. Neurol. 60 (2): 253-9. doi:10.1001/archneur.60.2.253. PMID 12580712. 
[195] Larson EB, Shadlen MF, Wang L, et al. (April 2004). "Survival after initial diagnosis of Alzheimer disease". Ann. Intern. Med. 140 (7): 

501-9. PMID 15068977. 
[196] Jagger C, Clarke M, Stone A (January 1995). "Predictors of survival with Alzheimer's disease: a community-based study". Psychol Med 25 

(1): 171-7. doi:10.1017/S0033291700028191. PMID 7792352. 
[197] Ganguli M, Dodge HH, Shen C, Pandav RS, DeKosky ST (May 2005). "Alzheimer disease and mortality: a 15-year epidemiological 

study". Arch. Neurol. 62 (5): 779-84. doi:10.1001/archneur.62.5.779. PMID 15883266. 
[198] Bermejo-Pareja F, Benito-Leon J, Vega S, Medrano MJ, Roman GC (January 2008). "Incidence and subtypes of dementia in three elderly 

populations of central Spain". J. Neurol. Sci. 264 (1-2): 63-72. doi:10.1016/j.jns.2007.07.021. PMID 17727890. 
[199] Di Carlo A, Baldereschi M, Amaducci L, et al. (January 2002). "Incidence of dementia, Alzheimer's disease, and vascular dementia in 

Italy. The ILSA Study". J Am Geriatr Soc 50 (1): 41-8. doi:10.1046/j.1532-5415.2002.50006.x. PMID 12028245.
[200] Andersen K, Launer LJ, Dewey ME, et al. (December 1999). "Gender differences in the incidence of AD and vascular dementia: The 

EURODEM Studies. EURODEM Incidence Research Group". Neurology 53 (9): 1992-7. PMID 10599770. 


[201] 2000 U.S. estimates: 

• Hebert LE, Scherr PA, Bienias JL, Bennett DA, Evans DA (August 2003). "Alzheimer disease in the US population: prevalence estimates 
using the 2000 census". Arch. Neurol. 60 (8): 1119-22. doi:10.1001/archneur.60.8.1119. PMID 12925369. 

• "Profiles of general demographic characteristics, 2000 census of population and housing, United States" ( 
cen2000/dpl/2kh00.pdf) (PDF). U.S. Census Bureau. 2001. . Retrieved 2008-08-27. 

[202] Ferri CP, Prince M, Brayne C, et al. (December 2005). "Global prevalence of dementia: a Delphi consensus study" (http://web. archive. 

org/web/20080625071754/http://www. (PDF). Lancet 366 (9503): 2112-7.

doi:10.1016/S0140-6736(05)67889-0. PMID 16360788. PMC 2850264. . Retrieved 2008-06-25. 
[203] World Health Organization (2006). Neurological Disorders: Public Health Challenges ( 

neurodiso/en/index.html). Switzerland: World Health Organization, pp. 204-207. ISBN 978-92-4-156336-9. . 
[204] Auguste D.:

• Alzheimer Alois (1907). "Über eine eigenartige Erkrankung der Hirnrinde [About a peculiar disease of the cerebral cortex]" (in
German). Allgemeine Zeitschrift für Psychiatrie und Psychisch-Gerichtliche Medizin 64 (1-2): 146-148.

• Alzheimer Alois (1987). "About a peculiar disease of the cerebral cortex. By Alois Alzheimer, 1907 (Translated by L. Jarvik and H. 
Greenson)". Alzheimer Dis Assoc Disord 1(1): 3-8. PMID 3331112. 

• Maurer Ulrike, Maurer Konrad (2003). Alzheimer: the life of a physician and the career of a disease. New York: Columbia University 
Press, p. 270. ISBN 0-231-11896-1. 

[205] Berrios G E (1990). "Alzheimer's disease: a conceptual history". Int. J. Ger. Psychiatry 5: 355-365. doi:10.1002/gps.930050603. 

[206] Kraepelin Emil, Diefendorf A. Ross (translated by) (2007-01-17). Clinical Psychiatry: A Textbook For Students And Physicians (Reprint). 

Kessinger Publishing, p. 568. ISBN 1-4325-0833-4. 
[207] Katzman Robert, Terry Robert D, Bick Katherine L (editors) (1978). Alzheimer's disease: senile dementia and related disorders. New 

York: Raven Press, p. 595. ISBN 0-89004-225-X. 
[208] Boller F, Forbes MM (June 1998). "History of dementia and dementia in history: an overview". J. Neurol. Sci. 158 (2): 125-33.

doi:10.1016/S0022-510X(98)00128-2. PMID 9702682. 
[209] Amaducci LA, Rocca WA, Schoenberg BS (November 1986). "Origin of the distinction between Alzheimer's disease and senile dementia: 

how history can clarify nosology". Neurology 36 (11): 1497-9. PMID 3531918. 
[210] Allegri RF, Butman J, Arizaga RL, et al. (August 2007). "Economic impact of dementia in developing countries: an evaluation of costs of 

Alzheimer-type dementia in Argentina". Int Psychogeriatr 19 (4): 705-18. doi:10.1017/S1041610206003784. PMID 16870037.
[211] Suh GH, Knapp M, Kang CJ (August 2006). "The economic costs of dementia in Korea, 2002". Int J Geriatr Psychiatry 21 (8): 722—8. 

doi:10.1002/gps.1552. PMID 16858741.
[212] Wimo A, Jonsson L, Winblad B (2006). "An estimate of the worldwide prevalence and direct costs of dementia in 2003". Dement Geriatr

Cogn Disord 21 (3): 175-81. doi:10.1159/000090733. PMID 16401889.
[213] Moore MJ, Zhu CW, Clipp EC (July 2001). "Informal costs of dementia care: estimates from the National Longitudinal Caregiver Study". 

J Gerontol B Psychol Sci Soc Sci 56 (4): S219-28. PMID 11445614.
[214] Jonsson L, Eriksdotter Jonhagen M, Kilander L, et al. (May 2006). "Determinants of costs of care for patients with Alzheimer's disease". 

Int J Geriatr Psychiatry 21 (5): 449-59. doi:10.1002/gps.1489. PMID 16676288.
[215] Zhu CW, Sano M (2006). "Economic considerations in the management of Alzheimer's disease" ( 

articlerender.fcgi?tool=pmcentrez&artid=2695165). Clin Interv Aging 1 (2): 143-54. doi:10.2147/ciia.2006.1.2.143. PMID 18044111.

PMC 2695165. 
[216] Gaugler JE, Kane RL, Kane RA, Newcomer R (April 2005). "Early community-based service utilization and its effects on 

institutionalization in dementia caregiving". Gerontologist 45 (2): 177—85. PMID 15799982. 
[217] Ritchie K, Lovestone S (November 2002). "The dementias". Lancet 360 (9347): 1759-66. doi:10.1016/S0140-6736(02)11667-9. 

PMID 12480441. 
[218] Brodaty H, Hadzi-Pavlovic D (September 1990). "Psychosocial effects on carers of living with persons with dementia". Aust N Z J

Psychiatry 24 (3): 351-61. doi:10.3109/00048679009077702. PMID 2241719. 
[219] Donaldson C, Tarrier N, Burns A (April 1998). "Determinants of carer stress in Alzheimer's disease". Int J Geriatr Psychiatry 13 (4): 

248-56. doi:10.1002/(SICI)1099-1166(199804)13:4<248::AID-GPS770>3.0.CO;2-0. PMID 9646153. 
[220] "The MetLife Study of Alzheimer's Disease: The Caregiving Experience" ( 

14050063731 156260663VlFAlzheimerCaregivingExperience.pdf) (PDF). MetLife Mature Market Institute. August 2006. . Retrieved 

[221] Pusey H, Richards D (May 2001). "A systematic review of the effectiveness of psychosocial interventions for carers of people with 

dementia". Aging Ment Health 5 (2): 107-19. doi:10.1080/13607860120038302. PMID 11511058.
[222] Garrard P, Maloney LM, Hodges JR, Patterson K (February 2005). " The effects of very early Alzheimer's disease on the characteristics of 

writing by a renowned author (http://brain.oxfordjournals.Org/cgi/content/full/128/2/250)". Brain 128 (Pt 2): 250—60. 

doi:10.1093/brain/awh341. PMID 15574466. 
[223] Sherman FT (September 2004). " Did President Reagan have mild cognitive impairment while in office? Living longer with Alzheimer's 

Disease (". Geriatrics 59 (9): 11, 15. 

PMID 15461232. 


[224] Venneri A, Forbes-Mckay KE, Shanks MF (April 2005). "Impoverishment of spontaneous language and the prediction of Alzheimer's 

disease". Brain 128 (Pt 4): E27. doi:10.1093/brain/awh419. PMID 15788549.
[225] "Hungary legend Puskas dies at 79" ( BBC News. 2006-11-17. . 

Retrieved 2008-01-25. 
[226] "Prime Ministers in History: Harold Wilson" ( 

harold-wilson). London: 10 Downing Street. . Retrieved 2008-08-18. 
[227] "Mi padre no reconoció al Rey pero notó el cariño" ["My father did not recognise the King, but he felt the affection"] (

elpepiesp/20080718elpepinac_ll/Tes). Madrid: El Pais. 2008. . Retrieved 2008-10-01. 
[228] "Chicago Rita Hayworth Gala" ( Alzheimer's Association. 2007. . Retrieved 2010-02-03. 
[229] "Charlton Heston has Alzheimer's symptoms" ( CNN. 2002-08-09. . 

Retrieved 2008-01-25. 
[230] Pauli Michelle (2007-12-12). "Pratchett announces he has Alzheimer's" ( 

michellepaulil). London: Guardian News and Media. . Retrieved 2008-08-18. 
[231] "Nobel Prize Winner has Alzheimer's" ( The Straits 

Times. 2009-10-08. . Retrieved 2009-10-09. 
[232] "Iris" ( IMDB. 2002-01-18. . Retrieved 2008-01-24. 
[233] Bayley John (2000). Iris: a memoir of Iris Murdoch. London: Abacus. ISBN 9780349112152. OCLC 41960006.
[234] "The notebook" ( IMDB. . Retrieved 2008-02-22. 
[235] Sparks Nicholas (1996). The notebook. Thorndike, Maine: Thorndike Press, p. 268. ISBN 078620821X. 

[236] "Thanmathra" ( . Retrieved 2008-01-24. 
[237] "Ashita no kioku" ( IMDB. . Retrieved 2008-01-24. 

[238] Ogiwara Hiroshi (2004) (in Japanese). Ashita no Kioku. Tokyo: Kobunsha. ISBN 9784334924461. OCLC 57352130.
[239] Munro Alice (2001). Hateship, Friendship, Courtship, Loveship, Marriage: Stories. New York: A.A. Knopf. ISBN 9780375413001. 

OCLC 46929223. The bear came over the mountain. 
[240] Malcolm and Barbara: 

• "Malcolm and Barbara: A love story" ( Dfgdocs. . Retrieved 2008-01-24. 

• "Malcolm and Barbara: A love story" ( 
shtml). BBC Cambridgeshire. . Retrieved 2008-03-02. 

• Plunkett, John (2007-08-07). "Alzheimer's film-maker to face ITV lawyers" ( 
broadcasting. itv). London: Guardian Media. . Retrieved 2008-01-24. 

[241] "Clinical Trials. Found 459 studies with search of: alzheimer" ( US National 
Institutes of Health. . Retrieved 2008-03-23. 

[242] Lashuel HA, Hartley DM, Balakhaneh D, Aggarwal A, Teichberg S, Callaway DJE (2002). "New class of inhibitors of amyloid-beta
fibril formation. Implications for the mechanism of pathogenesis in Alzheimer's disease". J Biol Chem 277 (45): 42881-42890.
doi:10.1074/jbc.M206593200. PMID 12167652. .

[243] Dodel R, Neff F, Noelker C, Pul R, Du Y, Bacher M, Oertel W (2010). "Intravenous Immunoglobulins as a Treatment for Alzheimer's
Disease: Rationale and Current Evidence" ( 

Intravenous_Immunoglobulins_as_a_Treatmen t_for.l.aspx). Drugs 70 (5): 513-528. doi:10.2165/11533070-000000000-00000.
PMID 20329802. . 

[244] Vaccination: 

• Hawkes CA, McLaurin J (November 2007). "Immunotherapy as treatment for Alzheimer's disease". Expert Rev Neurother 7(11): 
1535-48. doi:10.1586/14737175.7.11.1535. PMID 17997702. 

• Solomon B (June 2007). "Clinical immunologic approaches for the treatment of Alzheimer's disease". Expert Opin Investig Drugs 16 (6): 
819-28. doi:10.1517/13543784.16.6.819. PMID 17501694. 

• Woodhouse A, Dickson TC, Vickers JC (2007). "Vaccination strategies for Alzheimer's disease: A new hope?". Drugs Aging 24 (2): 
107-19. doi:10.2165/00002512-200724020-00003. PMID 17313199.

[245] "Study Evaluating ACC-001 in Mild to Moderate Alzheimers Disease Subjects" ( 

NCT00498602). Clinical Trial. US National Institutes of Health. 2008-03-11. . Retrieved 2008-06-05. 
[246] "Study Evaluating Safety, Tolerability, and Immunogenicity of ACC-001 in Subjects With Alzheimer's Disease" ( 

ct2/show/NCT00479557). US National Institutes of Health. . Retrieved 2008-06-05. 
[247] "Alzheimer's Disease Vaccine Trial Suspended on Safety Concern" ( 

Medpage Today. 2008-04-18. . Retrieved 2008-06-14. 
[248] "Bapineuzumab in Patients With Mild to Moderate Alzheimer's Disease/ Apo_e4 non-carriers" ( 

NCT00574132). Clinical Trial. US National Institutes of Health. 2008-02-29. . Retrieved 2008-03-23. 
[249] "Safety, Tolerability and Efficacy Study to Evaluate Subjects With Mild Cognitive Impairment" ( 

NCT00422981). Clinical Trial. US National Institutes of Health. 2008-03-11. . Retrieved 2008-03-23. 
[250] "Study Evaluating the Safety, Tolerability and Efficacy of PBT2 in Patients With Early Alzheimer's Disease" ( 

ct2/show/NCT00471211). Clinical Trial. US National Institutes of Health. 2008-01-13. . Retrieved 2008-03-23. 
[251] Etanercept research: 


• Tobinick E, Gross H, Weinberger A, Cohen H (2006). "TNF-alpha modulation for treatment of Alzheimer's disease: a 6-month pilot 
study" ( MedGenMed 8 (2): 25. 

PMID 16926764. PMC 1785182. 

• Griffin WS (2008). "Perispinal etanercept: potential as an Alzheimer therapeutic" ( 
fcgi?tool=pmcentrez&artid=2241592). J Neuroinflammation 5: 3. doi:10.1186/1742-2094-5-3. PMID 18186919. PMC 2241592. 

• Tobinick E (December 2007). "Perispinal etanercept for treatment of Alzheimer's disease". Curr Alzheimer Res 4 (5): 550—2. 
doi:10.2174/156720507783018217. PMID 18220520.

[252] Wischik Claude M, Bentham Peter, Wischik Damon J, Seng Kwang Meng (July 2008). "Tau aggregation inhibitor (TAI) therapy with 
remberTM arrests disease progression in mild and moderate Alzheimer's disease over 50 weeks" ( 

AKey={50E1744A-0C52-45B2-BF85-2A798BF24E02}). Alzheimer's & Dementia (Alzheimer's Association) 4 (4): T167. 
doi:10.1016/j.jalz.2008.05.438. . Retrieved 2008-07-30. 

[253] Harrington Charles, Rickard Janet E, Horsley David, et al. (July 2008). "Methylthioninium chloride (MTC) acts as a Tau aggregation 
inhibitor (TAI) in a cellular model and reverses Tau pathology in transgenic mouse models of Alzheimer's disease". Alzheimer's & Dementia 
(Alzheimer's Association) 4: T120-T121. doi:10.1016/j.jalz.2008.05.259. 

[254] Doody RS, Gavrilova SI, Sano M, et al. (July 2008). "Effect of dimebon on cognition, activities of daily living, behaviour, and global 
function in patients with mild-to-moderate Alzheimer's disease: a randomised, double-blind, placebo-controlled study". Lancet 372 (9634):
207-15. doi:10.1016/S0140-6736(08)61074-0. PMID 18640457. 

[255] Dimebon Disappoints in Phase 3 Trial ( 

[256] Wozniak M, Mee A, Itzhaki R (2008). "Herpes simplex virus type 1 DNA is located within Alzheimer's disease amyloid plaques". J Pathol
217 (1): 131-138. doi:10.1002/path.2449. PMID 18973185.

[257] Newberg, AB; Wintering, N; Khalsa, DS; Roggenkamp, H; Waldman, MR (2010). "Meditation effects on cognitive function and cerebral 
blood flow in subjects with memory loss: a preliminary study" ( Journal of Alzheimer's 
Disease 20 (2): 517-26. doi:10.3233/JAD-2010-1391 (inactive 2010-08-25). PMID 20164557. . (primary source) 

Further reading 

• Alzheimer' s Disease: Unraveling the Mystery ( 
UnravelingTheMystery). US Department of Health and Human Services, National Institute on Aging, NIH. 2008. 

• Can Alzheimer's Disease Be Prevented? ( 
US Department of Health and Human Services, National Institute on Aging, NIH. 2009. 

• Caring for a Person with Alzheimer's Disease: Your Easy-to-Use Guide from the National Institute on Aging 
( US Department of Health and Human 
Services, National Institute on Aging, NIH. 2009. 

• Cummings JL, Frank JC, Cherry D, Kohatsu ND, Kemp B, Hewett L, Mittman B (2002). "Guidelines for 
managing Alzheimer's disease: Part I. Assessment" ( 
American Family Physician 65 (11): 2263-2272. PMID 12074525. 

• Cummings JL, Frank JC, Cherry D, Kohatsu ND, Kemp B, Hewett L, Mittman B (2002). "Guidelines for 
managing Alzheimer's disease: Part II. Treatment" ( 
American Family Physician 65 (12): 2525-2534. PMID 12086242. 

• Russell D, Barston S, White M (2007-12-19). "Alzheimer's Behavior Management: Learn to manage common
behavior problems". helpguide.org. Retrieved 2008-02-29.



External links 

• Alzheimer's Disease Centers (ADCs) ( 

• Alzheimer's Disease Education and Referral (ADEAR) Center ( 

• Alzheimer's Association ( 

• UCSF Memory and Aging Center ( 

2D-FT NMR and Nuclear Magnetic Resonance Imaging

2D-FT Nuclear Magnetic resonance imaging 

(2D-FT NMRI), or two-dimensional Fourier transform nuclear magnetic resonance imaging, is primarily a
non-invasive imaging technique most commonly used in biomedical research and medical radiology/nuclear
medicine to visualize structures and functions of living systems and single cells. For example, it can provide
fairly detailed images of a human body in any selected cross-sectional plane, such as longitudinal, transversal or
sagittal. NMRI provides much greater contrast than computed tomography (CT), especially for the different soft
tissues of the body, because its most sensitive option observes the nuclear spin distribution and dynamics of highly
mobile molecules that contain the naturally abundant, stable hydrogen isotope ¹H, as in plasma water molecules,
blood, dissolved metabolites and fats. This approach makes it most useful in cardiovascular, oncological (cancer),
neurological (brain), musculoskeletal and cartilage imaging. Unlike CT, it uses no ionizing radiation, and unlike
nuclear imaging it does not employ any radioactive isotopes. Some of the first MRI images were published in 1973,
and the first study performed on a human took place on July 3, 1977. Earlier papers were also published by Peter
Mansfield in the UK (Nobel Laureate in 2003) and by R. Damadian in the USA (together with an approved patent
for magnetic imaging). Unpublished "high-resolution" (50 micron resolution) images of other living systems, such
as hydrated wheat grains, were obtained and communicated in the UK in 1977-1979, and were subsequently
confirmed by articles published in Nature.

NMRI Principle 

Certain nuclei, such as ¹H nuclei or "fermions", have spin-1/2, because there are two spin states, referred to as the
"up" and "down" states. The nuclear magnetic resonance absorption phenomenon occurs when samples containing
such nuclear spins are placed in a static magnetic field and a very short radiofrequency pulse is applied with a
center, or carrier, frequency matching that of the transition between the up and down states of the spin-1/2 ¹H
nuclei that were polarized by the static magnetic field.

[Figure: Modern 3 tesla clinical MRI scanner.]


A number of methods have been devised for combining magnetic field gradients and radiofrequency pulsed
excitation to obtain an image. Two major methods involve either 2D-FT or 3D-FT reconstruction from projections,
somewhat similar to computed tomography, except that in the former case the image interpretation must also
include dynamic and relaxation/contrast-enhancement information. Other schemes build the NMR image either
point-by-point or line-by-line, and some use gradients in the rf field rather than in the static magnetic field. The
majority of NMR images are routinely obtained either by the two-dimensional Fourier transform (2D-FT)
technique (with slice selection), or by the three-dimensional Fourier transform (3D-FT) technique, which is,
however, much more time-consuming at present. 2D-FT NMRI is sometimes called in common parlance
"spin-warp" imaging. An NMR image corresponds to a spectrum consisting of a number of "spatial frequencies" at
different locations in the sample investigated, or in a patient. A two-dimensional Fourier transformation of such a
"real" image may thus be considered as a representation of such "real waves" by a matrix of spatial frequencies
known as k-space. We shall see next in some mathematical detail how the 2D-FT computation works to obtain
2D-FT NMR images.
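The image/k-space relationship described above can be sketched numerically. The toy "image" below is an illustrative stand-in for a cross-sectional spin-density map (sizes and values are arbitrary, not scanner data): the forward 2D-FT yields its k-space matrix, and the inverse 2D-FT recovers the image.

```python
import numpy as np

# Toy 2D "image": a bright square on a dark background, standing in for a
# cross-sectional spin-density map.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Forward 2D-FT of the image gives its matrix of spatial frequencies (k-space),
# shifted so the zero-frequency component sits at the center.
k_space = np.fft.fftshift(np.fft.fft2(image))

# An MRI scanner effectively samples k-space directly; the image is then
# recovered by the inverse 2D-FT.
recovered = np.fft.ifft2(np.fft.ifftshift(k_space)).real

assert np.allclose(recovered, image)
```

In a real acquisition the k-space matrix is filled line by line by the gradient and rf pulse sequence; here it is simply computed from the toy image to show that the transform pair is lossless.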

Two-dimensional Fourier transform imaging 

A two-dimensional Fourier transform (2D-FT) is computed numerically, or carried out in two stages, both involving
"standard", one-dimensional Fourier transforms. However, the second-stage Fourier transform is not the inverse
Fourier transform (which would result in the original function that was transformed at the first stage), but a Fourier
transform in a second variable, "shifted" in value relative to that involved in the result of the first Fourier
transform. Such 2D-FT analysis is a very powerful method for the three-dimensional reconstruction of polymer and
biopolymer structures by two-dimensional nuclear magnetic resonance (Kurt Wüthrich, 1986: 2D-FT NMR of
solutions) for molecular weights (Mw) of the dissolved polymers up to about 50,000 Mw. For larger biopolymers
or polymers, more complex methods have been developed to obtain the resolution needed for the 3D reconstruction
of higher molecular structures, e.g. for 900,000 Mw; such methods can also be utilized in vivo. The 2D-FT method
is also widely utilized in optical spectroscopy, such as 2D-FT NIR hyperspectral imaging, and in MRI for research
and clinical, diagnostic applications in medicine. A more precise mathematical definition of the "double" Fourier
transform involved is specified next, and a precise example follows the definition. A 2D-FT, or two-dimensional
Fourier transform, is a standard Fourier transformation of a function of two variables, f(x₁, x₂), carried out first in
the first variable x₁, followed by the Fourier transform in the second variable x₂ of the resulting function F(s₁, x₂).
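The two-stage computation just defined can be verified numerically. In the discrete setting, a minimal sketch (with an arbitrary random array standing in for the sampled f(x₁, x₂)) shows that two successive one-dimensional transforms, one per variable, reproduce the direct 2D-FT:

```python
import numpy as np

# Arbitrary complex-valued samples of a function of two variables, f(x1, x2).
rng = np.random.default_rng(0)
f = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

# Stage 1: 1D Fourier transform along the first variable x1,
# giving the intermediate function F(s1, x2).
stage1 = np.fft.fft(f, axis=0)

# Stage 2: 1D Fourier transform of the intermediate result along the
# second variable x2.
stage2 = np.fft.fft(stage1, axis=1)

# The two successive 1D transforms equal the direct 2D transform.
assert np.allclose(stage2, np.fft.fft2(f))
```

This separability is what makes the 2D-FT practical: each stage is an ordinary 1D FFT, applied row by row and then column by column.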

Example 1 

A 2D Fourier transformation and phase correction is applied to a set of 2D NMR free induction decay (FID)
signals s(t₁, t₂), yielding a real 2D-FT NMR "spectrum" (a collection of 1D FT-NMR spectra) represented by a
matrix S whose elements are

S(ν₁, ν₂) = Re ∫∫ cos(ν₁t₁) exp(−iν₂t₂) s(t₁, t₂) dt₁ dt₂,

where ν₁ and ν₂ denote the discrete indirect double-quantum and single-quantum (detection) axes, respectively,
in the 2D NMR experiments. Next, the covariance matrix is calculated in the frequency domain according to the
following equation:

C(ν₂, ν₂′) = SᵀS = Σ_ν₁ [S(ν₁, ν₂) S(ν₁, ν₂′)],

with ν₂ and ν₂′ taking all possible single-quantum frequency values and with the summation carried out over all
discrete, double-quantum frequencies ν₁.
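The covariance-matrix step above reduces to a single matrix product. A minimal sketch, using a hypothetical random spectrum S (rows indexed by the double-quantum frequencies ν₁, columns by the single-quantum frequencies ν₂) rather than experimental data:

```python
import numpy as np

# Hypothetical real 2D spectrum S: 16 double-quantum rows (nu1) by
# 64 single-quantum columns (nu2). Random values stand in for real data.
rng = np.random.default_rng(1)
S = rng.standard_normal((16, 64))

# C(nu2, nu2') = sum over nu1 of S(nu1, nu2) * S(nu1, nu2'), i.e. S^T S.
C = S.T @ S

assert C.shape == (64, 64)        # one entry per pair of nu2 frequencies
assert np.allclose(C, C.T)        # the covariance matrix is symmetric
```

The symmetry check follows directly from the definition: swapping ν₂ and ν₂′ leaves the sum over ν₁ unchanged.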


Example 2 

2D-FT STEM images (obtained at Cornell University) of electron distributions in a high-temperature cuprate
superconductor "paracrystal" reveal both the domains (or "locations") and the local symmetry of the "pseudo-gap" in
the electron-pair correlation band responsible for the high-temperature superconductivity effect (a result whose full
mathematical-physics treatment remains to be developed). So far, three Nobel prizes have been awarded for 2D-FT
NMR/MRI work during 1992-2003, and an additional, earlier Nobel prize was awarded for the 2D-FT of X-ray data
("CAT scans"); recently, the advanced possibilities of 2D-FT techniques in chemistry, physiology and medicine have
received very significant recognition.

Brief explanation of NMRI Diagnostic Uses in Pathology 

As an example, diseased tissue, such as that inside tumors, can be detected because the hydrogen nuclei of
molecules in different tissues return to their equilibrium spin state at different relaxation rates. By changing the
pulse delays in the RF pulse sequence employed, and/or the pulse sequence itself, one may obtain a
"relaxation-based contrast" between different types of body tissue, such as normal versus diseased tissue cells.
Excluded from such diagnostic observations by NMRI are patients with certain metal implants, cochlear implants,
and cardiac pacemakers, who cannot undergo any NMRI scan because of the very intense magnetic and rf fields
employed in NMRI. It is conceivable that future developments may also include, alongside NMRI diagnostics,
treatments with special techniques involving applied magnetic fields and very-high-frequency RF. Already, surgery
with special tools is being performed experimentally in the presence of NMR imaging of subjects. Thus, NMRI is
used to image almost every part of the body, and is especially useful in neurological conditions, disorders of the
muscles and joints, for evaluating tumors, such as in lung or skin cancers, and for abnormalities in the heart
(especially in children with hereditary disorders), blood vessels, CAD and atherosclerosis.

See also 

• Earth's field NMR (EFNMR)
• Magnetic resonance microscopy
• Nuclear magnetic resonance (NMR)
• Medical imaging
• Magnetic resonance elastography
• Relaxation
• Robinson oscillator
• Rabi cycle


[1] Lauterbur, P. C. (1973). "Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance". Nature 242: 190-1. doi:10.1038/242190a0. (Nobel Laureate in 2003.)
[2] Howstuffworks, "How MRI Works".
[3] http://en.wikipedia.org/wiki/Nuclear_magnetic_resonance#Nuclear_spin_and_magnets


2D-FT NMR and Nuclear Magnetic Resonance Imaging 111 


• Kurt Wüthrich. 1986. NMR of Proteins and Nucleic Acids. J. Wiley and Sons: New York, Chichester, Brisbane, Toronto, Singapore. (Nobel Laureate in 2002 for 2D-FT NMR studies of the structure and function of biological macromolecules.)

• 2D-FT NMRI instrument example: a JPG color image of a 2D-FT NMR imaging instrument (HWB-NMRv900.jpg).

• Richard R. Ernst. 1992. Nuclear Magnetic Resonance Fourier Transform (2D-FT) Spectroscopy. Nobel Lecture, December 9, 1992.

• Peter Mansfield. 2003. Nobel Laureate in Physiology and Medicine for (2D and 3D) MRI.

• D. Benett. 2007. PhD Thesis, Worcester Polytechnic Institute. (Contains many 2D-FT images of brain scans; PDF of 2D-FT imaging applications to MRI in medical research.)

• Paul Lauterbur. 2003. Nobel Laureate in Physiology and Medicine for (2D and 3D) MRI.

• Jean Jeener. 1971. Two-dimensional Fourier Transform NMR, presented at an Ampère International Summer School, Basko Polje, unpublished. A verbatim quote follows from Richard R. Ernst's Nobel Laureate Lecture delivered on December 2nd, 1992: "A new approach to measure two-dimensional (2D) spectra has been proposed by Jean Jeener at an Ampère Summer School in Basko Polje, Yugoslavia, 1971. He suggested a 2D Fourier transform experiment consisting of two $\pi/2$ pulses with a variable time $t_1$ between the pulses and the time variable $t_2$ measuring the time elapsed after the second pulse as shown in Fig. 6 that expands the principles of Fig. 1. Measuring the response $s(t_1,t_2)$ of the two-pulse sequence and Fourier-transformation with respect to both time variables produces a two-dimensional spectrum $S(\omega_1,\omega_2)$ of the desired form. This two-pulse experiment by Jean Jeener is the forefather of a whole class of 2D experiments that can also easily be expanded to multidimensional spectroscopy."

• Haacke, E Mark; Brown, Robert F; Thompson, Michael; Venkatesan, Ramesh (1999). Magnetic resonance 
imaging: physical principles and sequence design. New York: J. Wiley & Sons. ISBN 0-471-35128-8. 

External links 

• 3D animation movie about an MRI exam

• Interactive Flash animation on MRI - online magnetic resonance imaging physics and technique course

• International Society for Magnetic Resonance in Medicine

• Danger of objects flying into the scanner




This article incorporates material by the original author from 2D-FT MR-Imaging and related Nobel awards (http://planetphysics.org/encyclopedia/2DFTImaging.html) on PlanetPhysics, which is licensed under the GFDL.


Cancer screening 

Cancer screening occurs for many types of cancer, including breast, prostate, lung, and colorectal cancer. Cancer screening is an attempt to detect unsuspected cancers in an asymptomatic population.

Screening tests suitable for large numbers of apparently healthy people must be relatively affordable, safe, 
noninvasive procedures with acceptably low rates of false positive results. If signs of cancer are detected, more 
definitive and invasive follow-up tests are performed to confirm the diagnosis. 

Screening for cancer can lead to earlier diagnosis in specific cases. Early diagnosis may lead to higher rates of successful treatment and extended life. However, screening may also falsely appear to extend survival, through lead time bias or length time bias.
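Lead time bias can be seen with simple arithmetic: moving the date of diagnosis earlier lengthens measured "survival after diagnosis" even when the date of death is unchanged. The ages below are hypothetical, purely for illustration:

```python
# Lead time bias, numerically: screening moves diagnosis earlier but the
# date of death is unchanged, so apparent survival grows even though the
# patient gains no extra life.
age_at_death = 70
age_dx_symptoms = 67      # diagnosed when symptoms appear
age_dx_screening = 63     # same cancer found 4 years earlier by screening

survival_without_screening = age_at_death - age_dx_symptoms   # 3 years
survival_with_screening = age_at_death - age_dx_screening     # 7 years

# Apparent survival more than doubles, yet the lifespan is identical.
print(survival_without_screening, survival_with_screening)    # 3 7
```

This is why screening trials must compare mortality rather than survival after diagnosis.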

Types of screening 

A number of different screening tests have been developed for different malignancies. Breast cancer screening can 
be done by breast self-examination, though this approach was discredited by a 2005 study in over 300,000 Chinese 
women. Screening for breast cancer with mammograms has been shown to reduce the average stage of diagnosis of 
breast cancer in a population. Stage of diagnosis in a country has been shown to decrease within ten years of 
introduction of mammographic screening programs. Colorectal cancer can be detected through fecal occult blood 
testing [1] and colonoscopy, which reduces both colon cancer incidence and mortality, presumably through the 
detection and removal of pre-malignant polyps. Similarly, cervical cytology testing (using the Pap smear) leads to 
the identification and excision of precancerous lesions. Over time, such testing has been followed by a dramatic 
reduction of cervical cancer incidence and mortality. Testicular self-examination is recommended for men beginning 
at the age of 15 years to detect testicular cancer. Prostate cancer can be screened using a digital rectal exam along 
with prostate specific antigen (PSA) blood testing, though some authorities (such as the US Preventive Services Task 
Force) recommend against routinely screening all men. 

Risks and benefits 

Screening for cancer is controversial in cases when it is not yet known if the test actually saves lives. Screening can lead to substantial false positive results and subsequent invasive procedures. The controversy arises when it is not clear if the benefits of screening outweigh the risks of follow-up diagnostic tests and cancer treatments. Cancer screening is not indicated unless life expectancy is greater than five years, and the benefit is uncertain over the age of 70.[3]

Prostate cancer 

When screening for prostate cancer, the PSA test may detect small cancers that would never become life threatening, 
but once detected will lead to treatment. This situation, called overdiagnosis, puts men at risk for complications from 
unnecessary treatment such as surgery or radiation. Follow up procedures used to diagnose prostate cancer (prostate 
biopsy) may cause side effects, including bleeding and infection. Prostate cancer treatment may cause incontinence 
(inability to control urine flow) and erectile dysfunction (erections inadequate for intercourse). 


Breast cancer 

Similarly, for breast cancer, there have recently been criticisms that breast screening programs in some countries cause more problems than they solve. Screening women in the general population yields a large number of false positive results that require extensive follow-up investigations to exclude cancer, leading to a high number-to-treat (or number-to-screen) to prevent or catch a single case of breast cancer early.
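The false-positive burden described above can be made concrete with Bayes' rule: the positive predictive value of a screening test collapses when the disease is rare in the screened population. The sensitivity, specificity and prevalence figures below are illustrative, not taken from any particular screening program:

```python
# Why population screening yields many false positives: even a fairly
# accurate test has a low positive predictive value at low prevalence.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# e.g. a test that is 90% sensitive and 95% specific, with a
# 5-per-1000 prevalence of undetected cancer:
p = ppv(0.90, 0.95, 0.005)
print(round(p, 3))   # 0.083 -> roughly 11 false alarms per true case
```

Under these assumed numbers, over 90% of positive results are false positives, which is the arithmetic behind the high number-to-screen.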

Cervical cancer 

Cervical cancer screening via the Pap smear has the best cost-benefit profile of all the forms of cancer screening from a public health perspective. Being largely caused by a virus, cervical cancer has clear risk factors (sexual contact), and its natural progression is to spread slowly over a number of years, giving the screening program more time to catch it early. Moreover, the test itself is easy to perform and relatively inexpensive.

Medical imaging 

Use of medical imaging to search for cancer in people without clear symptoms is similarly problematic. There is a significant risk of detecting what has recently been called an incidentaloma - a benign lesion that may be interpreted as a malignancy and subjected to potentially dangerous investigations. Recent studies of CT scan-based screening for lung cancer in smokers have had equivocal results, and systematic screening is not recommended as of July 2007. Randomized clinical trials of plain-film chest X-rays to screen for lung cancer in smokers have shown no benefit for this approach.

See also 

• Overdiagnosis 

• Type I and type II errors 

• Epidemiology of cancer 



[2] Croswell JM, Kramer BS, Kreimer AR, et al. (2009). "Cumulative incidence of false-positive results in repeated, multimodal cancer screening". Ann Fam Med 7 (3): 212-22. doi:10.1370/afm.942. PMID 19433838. PMC 2682972.
[3] Spalding MC, Sebesta SC (July 2008). "Geriatric screening and preventive care". Am Fam Physician 78 (2): 206-15. PMID 18697503. 

• Smith, Robert A.; Vilma Cokkinides; Harmon J. Eyre (2007). "Cancer Screening in the United States, 2007: A Review of Current Guidelines, Practices, and Prospects". CA: A Cancer Journal for Clinicians (American Cancer Society) 57: 90-104. doi:10.3322/canjclin.57.2.90. ISSN 1542-4863.

• Aziz, Khalid; George Y. Wu (2002). Cancer Screening: A Practical Guide for Physicians (http://www.springer.com/humana+press/book/978-0-89603-865-3). Current Clinical Practice. Humana Press. pp. 324. ISBN 0896038653.



External links 

• NHS cancer screening programmes ( 

• Screening for cancer ( asp?page=106), Cancer Research UK 

• Cancer screening overview ( 
healthprofessional), National Cancer Institute 

• Cancer Screening (, 

• ColonCancerCheck ( including fact sheets in 24 languages at Ontario 
Ministry of Health and Long-Term Care 

Ultrasound Imaging 

Ultrasound is cyclic sound pressure with a frequency greater than the upper limit of human hearing. Although this limit varies from person to person, it is approximately 20 kilohertz (20,000 hertz) in healthy, young adults, and thus 20 kHz serves as a useful lower limit in describing ultrasound. The production of ultrasound is used in many different fields, typically to penetrate a medium and measure the reflection signature or supply focused energy. The reflection signature can reveal details about the inner structure of the medium, a property also used by animals such as bats for hunting. The most well-known application of ultrasound is its use in sonography to produce pictures of fetuses in the human womb. There are a vast number of other applications as well.
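Since resolution is tied to wavelength, the relation $\lambda = c/f$ shows why imaging applications use MHz rather than kHz frequencies. A small illustrative sketch (the 1540 m/s soft-tissue sound speed is a standard textbook value; the frequencies are chosen only for the example):

```python
# Wavelength sets the resolution limit of an ultrasound image: lambda = c/f.
C_TISSUE = 1540.0   # speed of sound in soft tissue, m/s (textbook value)

def wavelength_mm(freq_hz):
    """Acoustic wavelength in millimetres at a given frequency."""
    return C_TISSUE / freq_hz * 1000.0

# Audible limit vs. a typical diagnostic frequency:
print(round(wavelength_mm(20e3), 1))   # 20 kHz -> 77.0 mm
print(round(wavelength_mm(5e6), 2))    # 5 MHz  -> 0.31 mm
```

A 5 MHz probe therefore resolves sub-millimetre structure that a 20 kHz wave could never distinguish.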


[Figure: Approximate frequency ranges corresponding to infrasound, acoustic sound and ultrasound, with a rough guide of some applications (medical and destructive; diagnostic and NDE; animals and chemistry).]

Ability to hear ultrasound 

The upper frequency limit in humans (approximately 20 kHz) is due to limitations of the middle ear, which acts as a low-pass filter. Ultrasonic hearing can occur if ultrasound is fed directly into the skull bone and reaches the cochlea through bone conduction, without passing through the middle ear.

It is a fact in psychoacoustics that children can hear some high-pitched sounds that older adults cannot hear, because in humans the upper limit pitch of hearing tends to become lower with age. A cell phone company has used this to create ring signals supposedly audible only to younger humans; but many older people are able to hear them, which may be due to the considerable variation of age-related deterioration in the upper hearing threshold.

[Image: A fetus in its mother's womb, viewed in a sonogram (brightness scan).]



Some animals, such as dogs, cats, dolphins, bats, and mice, have an upper frequency limit that is greater than that of the human ear and thus can hear ultrasound; this is how a dog whistle works.

[Image: An ultrasound examination in East Germany.]

[Image: Sonogram of a fetus at 14 weeks (profile).]

Diagnostic sonography 

Medical sonography (ultrasonography) is an ultrasound-based diagnostic medical imaging technique used to visualize muscles, tendons, and many internal organs, and to capture their size, structure and any pathological lesions with real time tomographic images. Ultrasound has been used by radiologists and sonographers to image the human body for at least 50 years and has become one of the most widely used diagnostic tools in modern medicine. The technology is relatively inexpensive and portable, especially when compared with other techniques, such as magnetic resonance imaging (MRI) and computed tomography (CT). Ultrasound is also used to visualize fetuses during routine and emergency prenatal care. Such diagnostic applications used during pregnancy are referred to as obstetric sonography.
As currently applied in the medical field, properly performed 
ultrasound poses no known risks to the patient. Sonography is 
generally described as a "safe test" because it does not use mutagenic 
ionizing radiation, which can pose hazards such as chromosome 
breakage and cancer development. However, ultrasonic energy has two 
potential physiological effects: it enhances inflammatory response; and 
it can heat soft tissue. Ultrasound energy produces a mechanical 
pressure wave through soft tissue. This pressure wave may cause 
microscopic bubbles in living tissues and distortion of the cell 
membrane, influencing ion fluxes and intracellular activity. When 
ultrasound enters the body, it causes molecular friction and heats the 
tissues slightly. This effect is typically very minor as normal tissue 
perfusion dissipates most of the heat, but with high intensity, it can 
also cause small pockets of gas in body fluids or tissues to expand and 
contract/collapse in a phenomenon called cavitation; however this is 
not known to occur at diagnostic power levels used by modern 
diagnostic ultrasound units. 

In 2008, the AIUM published a 130-page report titled "American Institute of Ultrasound in Medicine Consensus Report on Potential Bioeffects of Diagnostic Ultrasound", stating that there are indeed some potential risks to administering ultrasound tests, which include "postnatal thermal effects, fetal thermal effects, postnatal mechanical effects, fetal mechanical effects, and bioeffects considerations for ultrasound contrast agents." Observed long-term effects of tissue heating and cavitation include decreases in the size of red blood cells in cattle exposed to intensities higher than diagnostic levels. However, long-term effects of ultrasound exposure at diagnostic intensity are still unknown.

[Image: Head of a fetus, aged 29 weeks, in a 3D ultrasound.]

There are several studies that indicate harmful side effects on animal fetuses associated with the use of sonography on pregnant mammals. A Yale study in 2006 suggested exposure to ultrasound affects fetal brain development in mice. A typical fetal scan, including evaluation for fetal malformations, takes 10-30 minutes. The study showed that rodent brain cells failed to migrate to their proper positions and remained scattered in incorrect parts of the brain. This misplacement of brain cells during their development is linked to disorders ranging from "mental retardation and childhood epilepsy to developmental dyslexia, autism spectrum disorders and schizophrenia." However, this effect was only detectable after 30 minutes of continuous scanning. No link has yet been made between the test results on animals such as mice and the possible effects on humans. Although the possibility exists that biological effects on humans may be identified in the future, currently most doctors feel that, based on available information, the benefits to patients outweigh the risks. The ALARA (As Low As Reasonably Achievable) principle has also been advocated for ultrasound examinations: keeping the scanning time and power settings as low as possible while still consistent with diagnostic imaging. By this principle, non-medical uses, which by definition are not necessary, are actively discouraged.

Obstetric ultrasound can be used to identify many conditions that would be harmful to the mother and the baby. 
Many health care professionals consider the risk of leaving these conditions undiagnosed to be much greater than the 
very small risk, if any, associated with undergoing an ultrasound scan. According to Cochrane Review, routine 
ultrasound in early pregnancy (less than 24 weeks) appears to enable better gestational age assessment, earlier 
detection of multiple pregnancies and earlier detection of clinically unsuspected fetal malformation at a time when 
termination of pregnancy is possible. 

Sonography is used routinely in obstetric appointments during pregnancy, but the FDA discourages its use for non-medical purposes such as fetal keepsake videos and photos, even though it is the same technology used in hospitals.


Obstetric ultrasound is primarily used to:

• Date the pregnancy (gestational age)
• Confirm fetal viability
• Determine the location of the fetus, intrauterine vs ectopic
• Check the location of the placenta in relation to the cervix
• Check for the number of fetuses (multiple pregnancy)
• Check for major physical abnormalities
• Assess fetal growth (for evidence of intrauterine growth restriction (IUGR))
• Check for fetal movement and heartbeat
• Determine the sex of the baby

Unfortunately, results are occasionally wrong, producing a false positive (the Cochrane Collaboration is a relevant effort to improve the reliability of health care trials). False detection may result in patients being warned of birth defects when no such defect exists. Sex determination is only accurate after 12 weeks' gestation. When balancing risk and reward, there are recommendations to avoid the use of routine ultrasound in low-risk pregnancies. In many countries, however, ultrasound is used routinely in the management of all pregnancies.

According to the European Committee of Medical Ultrasound Safety (ECMUS), "Ultrasonic examinations should only be performed by competent personnel who are trained and updated in safety matters. Ultrasound produces heating, pressure changes and mechanical disturbances in tissue. Diagnostic levels of ultrasound can produce temperature rises that are hazardous to sensitive organs and the embryo/fetus. Biological effects of non-thermal origin have been reported in animals but, to date, no such effects have been demonstrated in humans, except when a microbubble contrast agent is present." Nonetheless, care should be taken to use low power settings and avoid pulsed wave scanning of the fetal brain unless specifically indicated in high risk pregnancies.

Obstetrics is not the only use of ultrasound. Soft tissue imaging of many other parts of the body is conducted with ultrasound. Other scans routinely conducted are cardiac, renal, liver and gallbladder (hepatic). Other common applications include musculoskeletal imaging of muscles, ligaments and tendons, ophthalmic ultrasound (eye) scans, and imaging of superficial structures such as the testicle, thyroid, salivary glands and lymph nodes. Because of the real-time nature of ultrasound, it is often used to guide interventional procedures such as fine needle aspiration (FNA) or biopsy of masses for cytology or histology testing in the breast, thyroid, liver, kidney, lymph nodes, muscles and joints.

Ultrasound scanners offer different Doppler techniques to visualize arteries and veins. The most common are colour Doppler and power Doppler, but other techniques, such as B-flow, are also used to show blood flow in an organ. Using pulsed wave or continuous wave Doppler, blood flow velocities can be calculated.
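The velocity calculation mentioned above follows the standard Doppler relation $v = c \,\Delta f / (2 f_0 \cos\theta)$. A small sketch, where the carrier frequency, measured shift and insonation angle are invented values for the example:

```python
import math

# Doppler estimate of blood velocity from the measured frequency shift:
#   v = c * df / (2 * f0 * cos(theta))
# The factor of 2 accounts for the transmit and receive Doppler shifts.
def doppler_velocity(df_hz, f0_hz, angle_deg, c=1540.0):
    """Velocity (m/s) along the vessel, given shift, carrier and beam angle."""
    return c * df_hz / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

# A 1.3 kHz shift at a 4 MHz carrier with a 60 degree insonation angle:
v = doppler_velocity(1300.0, 4e6, 60.0)
print(round(v, 2))   # ~0.5 m/s
```

The cosine term is why sonographers keep the beam-to-vessel angle well below 90 degrees: near 90 degrees the measured shift, and hence the estimate, collapses toward zero.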

Figures released for the period 2005-2006 by UK Government (Department of Health) show that non-obstetric 
ultrasound examinations constituted more than 65% of the total number of ultrasound scans conducted. 

Ultrasound is also increasingly being used in trauma and first aid cases, with emergency ultrasound becoming a 
staple of most EMT response teams. 

Biomedical ultrasonic applications 


Ultrasound also has therapeutic applications, which can be highly beneficial when used with dosage precautions: 

• According to Radiologylnfo, ultrasounds are useful in the detection of pelvic abnormalities and can involve 
techniques known as abdominal (transabdominal) ultrasound, vaginal (transvaginal or endovaginal) ultrasound in 
women, and also rectal (transrectal) ultrasound in men. 

• Focused high-energy ultrasound pulses can be used to break calculi such as kidney stones and gallstones into 
fragments small enough to be passed from the body without undue difficulty, a process known as lithotripsy. 

• Treating benign and malignant tumors and other disorders, via a process known as high intensity focused ultrasound (HIFU), also called focused ultrasound surgery (FUS). In this procedure, generally lower frequencies than in medical diagnostic ultrasound are used (250-2000 kHz), but significantly higher time-averaged intensities. The treatment is often guided by magnetic resonance imaging (MRI); this is called magnetic resonance-guided focused ultrasound (MRgFUS).
• Delivering chemotherapy to brain cancer cells and various drugs to other tissues, called acoustic targeted drug delivery (ATDD). These procedures generally use high frequency ultrasound (1-10 MHz) and a range of intensities (0-20 W/cm²). The acoustic energy is focused on the tissue of interest to agitate its matrix and make it more permeable for therapeutic drugs.
• Therapeutic ultrasound, a technique that uses more powerful ultrasound sources to generate cellular effects in soft tissue, has fallen out of favor as research has shown a lack of efficacy and a lack of scientific basis for proposed biophysical effects.
• Ultrasound has been used in cancer treatment.
• Cleaning teeth in dental hygiene.
• Focused ultrasound sources may be used for cataract treatment by phacoemulsification.
• Additional physiological effects of low-intensity ultrasound have recently been discovered, e.g. the ability to stimulate bone growth and its potential to disrupt the blood-brain barrier for drug delivery.

[Image: Enhanced drug uptake using acoustic targeted drug delivery (ATDD) in a brain-mimicking phantom and equine brain; the control shows much less concentration of drug without ATDD.]



• Ultrasound is essential to the procedures of ultrasound-guided sclerotherapy and endovenous laser treatment for the non-surgical treatment of varicose veins.
• Ultrasound-assisted lipectomy is lipectomy assisted by ultrasound. Liposuction can also be assisted by ultrasound.
• Doppler ultrasound is being tested for use in aiding tissue plasminogen activator treatment in stroke sufferers, in the procedure called ultrasound-enhanced systemic thrombolysis.
• Low intensity pulsed ultrasound is used for therapeutic tooth and bone regeneration.
• Ultrasound can also be used for elastography. This can be useful in medical diagnosis, as elasticity can discern healthy from unhealthy tissue for specific organs/growths. In some cases unhealthy tissue may have a lower system Q, meaning that the system acts more like a large heavy spring, as compared with the higher system Q of healthy tissue that responds to higher forcing frequencies. Ultrasonic elastography differs from conventional ultrasound in that a transceiver (pair) and a transmitter are used instead of only a transceiver. One transducer acts as both transmitter and receiver to image the region of interest over time. The extra transmitter is a very low frequency transmitter that perturbs the system, so the unhealthy tissue oscillates at a low frequency while the healthy tissue does not. The transceiver, which operates at a high frequency (typically MHz), then measures the displacement of the unhealthy tissue oscillating at the much lower frequency. The movement of the slowly oscillating tissue is used to determine the elasticity of the material, which can then be used to distinguish healthy tissue from unhealthy tissue.
• Ultrasound has been shown to act synergistically with antibiotics in bacterial cell killing.
• Ultrasound has been postulated to allow thicker eukaryotic cell tissue cultures by promoting nutrient penetration.
• Ultrasound in the low MHz range in the form of standing waves is an emerging tool for contactless separation, concentration and manipulation of microparticles and biological cells, a method referred to as acoustophoresis. The basis is the acoustic radiation force, a non-linear effect which causes particles to be attracted to either the nodes or anti-nodes of the standing wave depending on the acoustic contrast factor, which is a function of the sound velocities and densities of the particle and of the medium in which the particle is immersed.
• Ultrasound laboratory research based on clinically diagnostic systems is a popular way of making use of a real-time, lower cost (in comparison to MRI and CT) imaging modality for the study of biomedical applications and image processing techniques. The ultrasound research interface is a tool that bridges the gap between laboratory equipment and a clinical device, and can be used to collect raw data for external or real-time analysis using special algorithms and protocols.
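The node-versus-anti-node behaviour in acoustophoresis can be sketched with the standard one-dimensional acoustic contrast factor, which combines the density ratio and the compressibility ratio of particle and medium. The material constants below are typical textbook values, used only for illustration:

```python
# Acoustic contrast factor for a particle in a 1D ultrasonic standing wave.
# A positive factor means the radiation force drives the particle toward
# pressure nodes; a negative factor, toward anti-nodes.
def contrast_factor(rho_p, c_p, rho_m, c_m):
    """rho_*: densities (kg/m^3); c_*: sound speeds (m/s)."""
    kappa_p = 1.0 / (rho_p * c_p**2)     # particle compressibility
    kappa_m = 1.0 / (rho_m * c_m**2)     # medium compressibility
    rho_ratio = rho_p / rho_m
    return (5*rho_ratio - 2) / (2*rho_ratio + 1) - kappa_p / kappa_m

# A polystyrene bead (often used as a cell stand-in) in water:
phi = contrast_factor(1050.0, 2350.0, 1000.0, 1480.0)
print(phi > 0)   # True: the bead collects at the pressure nodes
```

Most cells in water-like buffers likewise have a positive factor, which is why acoustophoretic devices collect them at the channel's pressure nodes.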

Industrial ultrasound 

Ultrasonic testing is a type of nondestructive testing commonly used to 
find flaws in materials and to measure the thickness of objects. 
Frequencies of 2 to 10 MHz are common but for special purposes other 
frequencies are used. Inspection may be manual or automated and is an 
essential part of modern manufacturing processes. Most metals can be 
inspected as well as plastics and aerospace composites. Lower 
frequency ultrasound (50—500 kHz) can also be used to inspect less 
dense materials such as wood, concrete and cement. 
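Thickness measurement by pulse-echo testing reduces to thickness = v·t/2, halving the round-trip time of the echo. A small sketch (the 5900 m/s longitudinal velocity is a typical figure for steel; the echo time is invented for the example):

```python
# Pulse-echo thickness gauging: the pulse travels down to the back wall
# and returns, so thickness = velocity * round_trip_time / 2.
def thickness_mm(round_trip_us, velocity_m_s=5900.0):
    """Thickness in mm from a round-trip echo time in microseconds."""
    return velocity_m_s * (round_trip_us * 1e-6) / 2.0 * 1000.0

# A 6.78 microsecond round trip in steel corresponds to about a 20 mm plate:
t = thickness_mm(6.78)
print(round(t, 1))   # ~20.0 mm
```

The same relation, with tissue sound speeds, underlies distance measurement in diagnostic scanners.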

Ultrasound can also be used for heat transfer in liquids. Researchers recently employed ultrasound in a dry corn milling plant to enhance ethanol production.

[Image: Non-destructive testing of a swing shaft showing spline cracking.]


Ultrasonic manipulation and characterization of particles 

A researcher at the Industrial Materials Research Institute, Alessandro Malutta, devised an experiment that demonstrated the trapping action of ultrasonic standing waves on wood pulp fibers diluted in water, and their parallel orienting into the equidistant pressure planes. The time to orient the fibers in equidistant planes is measured with a laser and an electro-optical sensor. This could provide the paper industry a quick on-line fiber size measurement system. A somewhat different implementation was demonstrated at Penn State University using a microchip which generated a pair of perpendicular standing surface acoustic waves, allowing particles to be positioned equidistant to each other on a grid. This experiment, called "acoustic tweezers", can be used for applications in material sciences, biology, physics, chemistry and nanotechnology.

Ultrasonic cleaning 

Ultrasonic cleaners, sometimes mistakenly called supersonic cleaners, are used at frequencies from 20 to 40 kHz for jewellery, lenses and other optical parts, watches, dental instruments, surgical instruments, diving regulators and industrial parts. An ultrasonic cleaner works mostly by the energy released from the collapse of millions of microscopic cavitation bubbles near the dirty surface. The bubbles made by cavitation collapse to form tiny jets directed at the surface.

Ultrasonic disintegration 

Similar to ultrasonic cleaning, biological cells, including bacteria, can be disintegrated. High power ultrasound produces cavitation that facilitates particle disintegration or reactions. This has uses in biological science for analytical or chemical purposes (sonication and sonoporation) and in killing bacteria in sewage. Dr. Samir Khanal of Iowa State University employed high power ultrasound to disintegrate corn slurry to enhance liquefaction and saccharification for higher ethanol yield in dry corn milling plants. Similarly, Dr. Oleg Kozyuk was able to improve ethanol yield with hydrodynamic cavitation.

Ultrasonic humidifier 

The ultrasonic humidifier, one type of nebulizer (a device that creates a very fine spray), is a popular type of humidifier. It works by vibrating a metal plate at ultrasonic frequencies to nebulize (sometimes incorrectly called "atomize") the water. Because the water is not heated for evaporation, it produces a cool mist. The ultrasonic pressure waves nebulize not only the water but also materials in the water, including calcium, other minerals, viruses, fungi, bacteria, and other impurities. Illness caused by impurities that reside in a humidifier's reservoir falls under the heading of "humidifier fever".

Ultrasound Identification (USID) 

Ultrasound Identification (USID) is a Real Time Locating System (RTLS) or Indoor Positioning System (IPS) 
technology used to automatically track and identify the location of objects in real time using simple, inexpensive 
nodes (badges/tags) attached to or embedded in objects and devices, which then transmit an ultrasound signal to 
communicate their location to microphone sensors. 
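One way such a system can turn ultrasound arrival times at fixed microphones into a tag position is ordinary trilateration. The sketch below is an idealized 2D illustration with noise-free timing and made-up geometry, not a description of any particular USID product:

```python
import numpy as np

# Locating a tag from ultrasound times-of-flight at three fixed microphones.
# Real systems must also handle clock offset, echoes and obstructed paths.
C_AIR = 343.0                                     # speed of sound in air, m/s
mics = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0]])   # microphone positions
tag = np.array([3.0, 2.0])                        # ground-truth tag position
tof = np.linalg.norm(mics - tag, axis=1) / C_AIR  # simulated flight times

# Each time gives a range d_i = c * t_i. Subtracting the first range
# equation from the others linearizes the circles into 2*(m_i - m_0).x = b_i:
d = C_AIR * tof
A = 2.0 * (mics[1:] - mics[0])
b = d[0]**2 - d[1:]**2 + np.sum(mics[1:]**2, axis=1) - np.sum(mics[0]**2)
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pos, 3))                           # recovers [3. 2.]
```

With more than the minimum number of microphones the same least-squares step averages out timing noise, which is why deployments favour redundant sensors.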

Ultrasonic welding 

In ultrasonic welding of plastics, high frequency (15 kHz to 40 kHz), low amplitude vibration is used to create heat by way of friction between the materials to be joined. The interface of the two parts is specially designed to concentrate the energy for maximum weld strength.



Ultrasound and animals 

Bats use a variety of ultrasonic ranging 
(echolocation) techniques to detect their 
prey. They can detect frequencies as high as 
100 kHz, although there is some 
disagreement on the upper limit. 


There is evidence that ultrasound in the 
range emitted by bats causes flying moths to 
make evasive manoeuvres because bats eat 
moths. Ultrasonic frequencies trigger a
reflex action in the noctuid moth that causes
it to drop a few inches in its flight to evade
the bat.

Bats use ultrasound to move in the darkness.

Tiger moths also emit clicks which jam bats' echolocation. [32] [33]

Ultrasound generator/speaker systems are sold with claims that they frighten away rodents and insects, but there is 
no scientific evidence that the devices work. 


Dogs

Dogs can hear sound at higher frequencies than humans can. A dog whistle exploits this by emitting a high
frequency sound to call to a dog. Many dog whistles emit sound in the upper audible range of humans, but some,
such as the silent whistle, emit ultrasound at a frequency in the range 18-22 kHz.

Dolphins and whales 

It is well known that some whales can hear ultrasound and have their own natural sonar system. Some whales use
ultrasound as a hunting tool, both for detection of prey and as an attack. [37]



Fish

Several types of fish can detect ultrasound. In the order Clupeiformes, members of the subfamily Alosinae (shad)
have been shown to be able to detect sounds up to 180 kHz, while the other subfamilies (e.g. herrings) can hear only
up to 4 kHz. [38]



Horses

Diagnostic ultrasound is used externally in horses for evaluation of soft tissue and tendon injuries, and internally
in particular for reproductive work - evaluation of the reproductive tract of the mare and pregnancy detection. [39] [40]
It may also be used in an external manner in stallions for evaluation of testicular condition and diameter, as well as
internally for reproductive evaluation (deferent duct etc.).

Cattle



Starting at the turn of the century, ultrasound technology began to be used by the beef cattle industry to improve 
animal health and the yield of cattle operations. Ultrasound is used to evaluate fat thickness, rib eye area, and 
intramuscular fat in living animals. It is also used to evaluate the health and characteristics of unborn calves. 

Ultrasound technology provides a means for cattle producers to obtain information that can be used to improve the 
breeding and husbandry of cattle. The technology can be expensive, and it requires a substantial time commitment 

for continuous data collection and operator training. Nevertheless, this technology has proven useful in managing
and running a cattle breeding operation. [41] [42]


Sonochemistry

Power ultrasound in the 20-100 kHz range is used in chemistry (sonochemistry). The ultrasound does not interact
directly with molecules to induce the chemical change, as its typical wavelength (in the millimeter range) is too long
compared to the molecules. Instead:

• It causes cavitation, which produces local extremes of temperature and pressure in the liquid where the reaction
happens.
• It breaks up solids and removes passivating layers of inert material to give a larger surface area for the reaction to
occur over.

Both of these make the reaction faster. In 2008, Atul Kumar reported synthesis of Hantzsch esters and
polyhydroquinoline derivatives via a multi-component reaction protocol in aqueous micelles using ultrasound. [43]

• It is used in extraction, using different frequencies. 


Ultrasonic range finding 

A common use of ultrasound is in 
range finding; this use is also called 
SONAR (sound navigation and
ranging). This works similarly to 
RADAR (radio detection and ranging): 
An ultrasonic pulse is generated in a 
particular direction. If there is an 
object in the path of this pulse, part or 
all of the pulse will be reflected back 
to the transmitter as an echo and can be 
detected through the receiver path. By 
measuring the difference in time 
between the pulse being transmitted 
and the echo being received, it is possible to determine how far away the object is. 
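The time-of-flight calculation described above can be sketched in a few lines of Python. This is an illustrative example only; the speed-of-sound values (roughly 343 m/s in air at room temperature, roughly 1480 m/s in water) are assumed:

```python
def echo_distance(round_trip_time_s, speed_of_sound_m_s=343.0):
    """Distance to a reflecting object from the round-trip echo time.

    The pulse travels out to the object and back, so the one-way
    distance is half of (speed * total travel time).
    """
    return speed_of_sound_m_s * round_trip_time_s / 2.0

# An echo received 0.02 s after transmission in air:
print(echo_distance(0.02))          # -> 3.43 (metres)
# The same delay underwater (~1480 m/s) corresponds to a much longer range,
# which is why SONAR in water reaches far greater distances:
print(echo_distance(0.02, 1480.0))  # -> 14.8 (metres)
```

The division by two is the essential step: the measured delay covers the path to the object and back again.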

The measured travel time of SONAR pulses in water is strongly dependent on the temperature and the salinity of the 
water. Ultrasonic ranging is also applied for measurement in air and for short distances. Such methods can
easily and rapidly measure the layout of rooms.

Although range finding underwater is performed at both sub-audible and audible frequencies for great distances (1 to 
several kilometers), ultrasonic range finding is used when distances are shorter and the accuracy of the distance 
measurement is desired to be finer. Ultrasonic measurements may be limited through barrier layers with large 
salinity, temperature or vortex differentials. Ranging in water varies from about hundreds to thousands of meters, but 
can be performed with centimeters to meters accuracy. 

Principle of an active sonar


Other uses 

Ultrasound when applied in specific configurations can produce short bursts of light in an exotic phenomenon known 
as sonoluminescence. This phenomenon is being investigated partly because of the possibility of bubble fusion (a 
nuclear fusion reaction hypothesized to occur during sonoluminescence). 


Researchers have successfully used ultrasound to regenerate dental material. [44]

Ultrasound is used when characterizing particulates through the technique of ultrasound attenuation spectroscopy or 
by observing electroacoustic phenomena. 

In rheology, an acoustic rheometer relies on the principle of ultrasound. In fluid mechanics, fluid flow can be 
measured using an ultrasound flow meter. 

Ultrasound also plays a role in Sonic weaponry. 

High- and ultra-high-frequency ultrasound waves are used in acoustic microscopy.

Audio can be propagated by modulated ultrasound. 

Nonlinear propagation effects 

Because of their high amplitude to wavelength ratio, ultrasonic waves commonly display nonlinear propagation. 


Safety

Occupational exposure to ultrasound in excess of 120 dB may lead to hearing loss. Exposure in excess of 155 dB
may produce heating effects that are harmful to the human body, and it has been calculated that exposures above
180 dB may lead to death. [45]

See also 


Bat detector 

Infrasound — sound at extremely low frequencies 


Medical ultrasonography 

Picosecond Ultrasonics 


Sound from ultrasound (also known as Hypersonic sound) 



Zone sonography technology 



References

[1] Novelline, Robert (1997). Squire's Fundamentals of Radiology (5th ed.). Harvard University Press. pp. 34-35. ISBN 0674833392.

[2] Takeda, S.; Morioka, I.; Miyashita, K.; Okumura, A.; Yoshida, Y.; Matsumoto, K. (1992). "Age variation in the upper limit of hearing" (http:/ 

/www. 112475/). European Journal of Applied Physiology 65 (5): 403—408. 

doi:10.1007/BF00243505. . Retrieved 2008-11-17. 
[3] "A Ring Tone Meant to Fall on Deaf Ears" ( 

en=2a80dl50770df0df&ei=5090&partner=rssuserland&emc=rss<br />) (New York Times article) 
[4] Hangiandreou, N. J. (2003). "Physics Tutorial for Residents: Topics in US: B-mode US: Basic Concepts and New Technology - 

Hangiandreou". Radiographics 23 (4): 1019. doi:10.1148/rg.234035034. 
[5] Bioeffects Committee of the American Institute of Ultrasound in Medicine (2008-04-01). "American Institute of Ultrasound in Medicine 

Consensus Report on Potential Bioeffects of Diagnostic Ultrasound: Executive Summary" ( 

abstract/27/4/503). Journal of Ultrasound in Medicine (American Institute of Ultrasound in Medicine) 27 (4): 503-515. PMID 18359906.
[6] AIUM Consensus report (2008) ( 

[7] Soetanto, Kawan; Kobayashi, Masahiro; Okujima, Motoyoshi (1998). "Fundamental Examination of Cattle Red Blood Cells Damage with 

Ultrasound Exposure Microscopic System (UEMS)" ( Japanese Journal of Applied Physics 37: 

3070. doi:10.1143/JJAP.37.3070. . 
[8] FDA Radiological Health - Ultrasound Imaging ( 
[9] "Ultrasonographic Screening for Fetal Malformations" ( 
[10] Patient Information - Ultrasound Safety ( 

[11] "Ultrasound for fetal assessment in early pregnancy" ( .
[12] ( 

[13] Clinical Safety Statements ( 

[14] Essentials of Medical Ultrasound: A Practical Introduction to the Principles, Techniques and Biomedical Applications, edited by M. H. 

Rapacholi, Humana Press 1982 
[15] "Ultrasound - Pelvis" ( . 
[16] Lewis Jr., George K.; Olbricht, Willam L.; Lewis, George (2008). Acoustic enhanced Evans blue dye perfusion in neurological tissues. 2. 

pp. 020001. doi:10.1121/1.2890703. 
[17] Lewis, George K.; Olbricht, William (2007). A phantom feasibility study of acoustic enhanced drug delivery to neurological tissue, pp. 67. 

doi:10.1109/LSSA.2007.4400886.
[18] "Acoustics and brain cancer" ( . 
[19] Valma J Robertson, Kerry G Baker (2001). "A Review of Therapeutic Ultrasound: Effectiveness Studies" ( 

content/short/8 1/7/1339). Physical Therapy 81 (7): 1339. PMID 11444997. . 
[20] Kerry G Baker, et al (2001). "A Review of Therapeutic Ultrasound: Biophysical Effects". Physical Therapy 81 (7): 1351. PMID 11444998.
[21] Carmen, JC; Roeder, BL; Nelson, JL; Beckstead, BL; Runyan, CM; Schaalje, GB; Robison, RA; Pitt, WG (2004). "Ultrasonically enhanced 

vancomycin activity against Staphylococcus epidermidis biofilms in vivo." ( 

fcgi?tool=pmcentrez&artid=1361255). Journal of biomaterials applications 18 (4): 237-45. doi: 10. 1177/0885328204040540. 

PMID 15070512. PMC 1361255. 
[22] Pitt WG, Ross SA (2003). "Ultrasound increases the rate of bacterial cell growth" ( 

fcgi?tool=pmcentrez&artid=1361254). Biotechnol Prog. 19 (3): 1038-44. doi:10.1021/bp0340685. PMID 12790676. PMC 1361254.
[23] Using Infrared To See If You're Lit ( 
[24] Dion, J. L.; Malutta, A.; Cielo, P., "Ultrasonic inspection of fiber suspensions", The Journal of the Acoustical Society of America, Volume 

72, Issue 5, November 1982, pp.1524-1526. 
[25] (Hans) Van Leeuwen, J; Akin, Beril; Khanal, Samir Kumar; Sung, Shihwu; Grewell, David; (Hans) Van Leeuwen, J (2006). "Ultrasound 

pre-treatment of waste activated sludge" ( Water Science & Technology: 

Water Supply 6: 35. doi: 10.2166/ws.2006.962. . 
[26] U Neis, K Nickel and A Tiehm (2000). "Enhancement of anaerobic sludge digestion by ultrasonic disintegration" (http://www.iwaponline. 

com/wst/04209/wst042090073.htm). Water Science & Technology 42 (9): 73. . 
[27] Ethanol Producer Magazine; Tiny Bubbles to Make You Happy ( 

[28] Oleg Kozyuk; Arisdyne Systems Inc. (; US patent US 7,667,082 B2; Apparatus and Method for Increasing 

Alcohol Yield from Grain 
[29] Oie, S; Masumoto, N; Hironaga, K; Koshiro, A; Kamiya, A (1992). "Microbial contamination by ultrasonic humidifier". Microbios 72 

(292-293): 161-6. PMID 1488018. 
[30] Cancel, Juan (1998). "Frequency of Bat Sonar" ( The Physics Factbook. . 
[31] Jones, G; D A Waters (2000). "Moth hearing in response to bat echolocation calls manipulated independently in time and frequency." (http:/ 

/ Proceedings of the Royal Society B Biological Sciences 

267 (1453): 1627. doi:10.1098/rspb.2000.1188. PMID 11467425. PMC 1690724. 


[32] Matt Kaplan (July 17, 2009). "Moths Jam Bat Sonar, Throw the Predators Off Course" ( 

07/090717-moths-jam-bat-sonar.html). National Geographic News. . 
[33] Some Moths Escape Bats By Jamming Sonar (http://www. php?storyld=106733884) (video) 
[34] Hui, Yiu H. (2003). Food plant sanitation ( 


q=ultrasonic&f=false). CRC Press, p. 289. ISBN 0824707931. . 
[35] NAS; National Research Council (U.S.). Committee on Plant and Animal Pests. Subcommittee on Vertebrate pests, Robert A. McCabe, 

National Academy of Sciences (U.S.) (1970). Vertebrate pests: problems and control; Volume 5 of Principles of plant and animal pest 

control, National Research Council (U.S.). Committee on Plant and Animal Pests; Issue 1697 of Publication (National Research Council 

(U.S.))) (http://books. ?id=uDorAAAAYAAJ&pg=PA92&dq=mouse+ultrasonic+repellent&hl=en& 


f=false). National Academies, p. 92. . 
[36] ASTM; Kathleen A. Fagerstone, Richard D. Curnow, ASTM Committee E-35 on Pesticides, ASTM Committee E-35 on Pesticides. 

Subcommittee E35. 17 on Vertebrate Pest Control Agents (1989). Vertebrate pest control and management materials: 6th volume; Volume 

1055 of ASTM special technical publication (http://books. google. com/books ?id=vYGZs2A7S_IC&printsec=frontcover&dq=mouse+ 


ved=0CFYQ6AEwBjgK#v=onepage&q=ultrasonic&f=false). ASTM International, p. 8. ISBN 0803112815. . 
[37] Voices in the Sea ( 

[38] Mann DA, et al. (2001). "Ultrasound detection by clupeiform fishes". JASA 109 (6): 3048-3054. doi:10.1121/1.1368406.
[39] Ultrasound Characteristics of the Uterus in the Cycling Mare and their Correlation with Steroid Hormones and Timing of Ovulation (http:// 
[40] McKinnon and Voss "Equine Reproduction" (Lea & Febiger; 1993) 
[41] Bennett, David (May 19, 2005). "Subiaco Abbey's Angus herd" ( Delta Farm Press. Archived 

from the original ( on February 27, 2010. . Retrieved February 27, 2010. 
[42] Wagner, Wayne. "Extension Effort in Beef Cattle Breeding & Selection" ( West Virginia 

University Extension Service. Archived from the original ( on February 27, 2010. 

. Retrieved February 27, 2010. 
[43] Kumar, Atul, Ram Awatar Maurya. Efficient Synthesis of Hantzsch Esters and Polyhydroquinoline Derivatives in Aqueous Micelles (http:// Synlett 2008. pp 883—885. 
[44] Toothsome research may hold key to repairing dental disasters - ExpressNews - University of Alberta (http://www.expressnews.ualberta. 

ca/ article. cfm?id=769 1) 
[45] Part II, industrial and commercial applications (1991). Guidelines for the Safe Use of Ultrasound Part II - Industrial & Commercial 

Applications - Safety Code 24 ( 

Health Canada. ISBN 0-660-13741-0. . 

Further reading 

• Kundu, Tribikram. Ultrasonic nondestructive evaluation: engineering and biological material characterization. 
Boca Raton, FL: CRC Press, c2004. ISBN 0-8493-1462-3. 

External links 

• Guidelines for the Safe Use of Ultrasound ( 
safety-code_24-securite/health-sante-eng.php): valuable insight on the boundary conditions tending towards 
abuse of ultrasound. 

• High-frequency hearing risk for operators of industrial ultrasonic devices ( 

• Safety Issues in Fetal Ultrasound ( 

• Damage to red blood cells induced by acoustic cavitation(ultrasound) (http://cat.imst. fr/?aModele=afficheN& 



Medical ultrasonography 

Diagnostic sonography (ultrasonography) is an ultrasound-based diagnostic imaging technique used to visualize 
subcutaneous body structures including tendons, muscles, joints, vessels and internal organs for possible pathology 
or lesions. Obstetric sonography is commonly used during pregnancy and is widely recognized by the public. 

In physics, the term "ultrasound" applies to all acoustic energy (a longitudinal, mechanical wave) with a frequency
above the audible range of human hearing. The audible range of sound is 20 hertz to 20 kilohertz; ultrasound is
sound at frequencies greater than 20 kilohertz.

Diagnostic applications 


Typical diagnostic sonographic scanners operate in the frequency 
range of 2 to 18 megahertz, though frequencies up to 50-100
megahertz have been used experimentally in a technique known as
biomicroscopy in special regions, such as the anterior chamber of the eye.
The above frequencies are hundreds of times greater than the limit of 
human hearing, which is typically accepted as 20 kilohertz. The choice 
of frequency is a trade-off between spatial resolution of the image and 
imaging depth: lower frequencies produce less resolution but image 
deeper into the body. 

Sonography (ultrasonography) is widely used in medicine. It is 

possible to perform both diagnostic and therapeutic procedures, using

ultrasound to guide interventional procedures (for instance biopsies or 

drainage of fluid collections). Sonographers are medical professionals 

who perform scans for diagnostic purposes. Sonographers typically use a hand-held probe (called a transducer) that 

is placed directly on and moved over the patient. 

Sonography is effective for imaging soft tissues of the body. Superficial structures such as muscles, tendons, testes, 
breast and the neonatal brain are imaged at a higher frequency (7-18 MHz), which provides better axial and lateral 
resolution. Deeper structures such as liver and kidney are imaged at a lower frequency (1-6 MHz) with lower axial and
lateral resolution but greater penetration. 

Medical sonography is used in the study of many different systems: 

Orthogonal planes of a three-dimensional sonographic volume with transverse and coronal measurements for
estimating foetal cranial volume. [1] [2]



• Cardiology: Echocardiography is an essential tool in cardiology, used to diagnose e.g. dilatation of parts of the
heart and the function of heart ventricles and valves (see echocardiography).
• Emergency medicine: Point of care ultrasound has many applications in the Emergency Department, including the
Focused Assessment with Sonography for Trauma (FAST) exam for assessing significant hemoperitoneum or
pericardial tamponade after trauma. Ultrasound is routinely used in the Emergency Department to expedite the
care of patients with right upper quadrant abdominal pain who may have gallstones or cholecystitis (see FAST
exam).
• Gastroenterology: In abdominal sonography, the solid organs of the abdomen such as the pancreas, aorta, inferior
vena cava, liver, gall bladder, bile ducts, kidneys, and spleen are imaged. Sound waves are blocked by gas in the
bowel and attenuated to differing degrees by fat, so diagnostic capability in this area is limited. The appendix can
sometimes be seen when inflamed, e.g. in appendicitis.
• Gynecology: see gynecologic ultrasonography.
• Neurology: for assessing blood flow and stenoses in the carotid arteries (see carotid ultrasonography) and the big
intracerebral arteries (see transcranial Doppler).
• Obstetrics: Obstetrical ultrasound is commonly used during pregnancy to check on the development of the fetus
(see obstetric ultrasonography).
• Ophthalmology: see A-scan.
• Urology: to determine, for example, the amount of fluid retained in a patient's bladder. In a pelvic sonogram,
organs of the pelvic region are imaged, including the uterus and ovaries or urinary bladder. Men are sometimes
given a pelvic sonogram to check on the health of their bladder and prostate. There are two methods of performing
a pelvic sonography - externally or internally. The internal pelvic sonogram is performed either transvaginally (in
a woman) or transrectally (in a man). Sonographic imaging of the pelvic floor can produce important diagnostic
information regarding the precise relationship of abnormal structures to other pelvic organs, and it represents a
useful hint for treating patients with symptoms related to pelvic prolapse, double incontinence and obstructed
defecation.
• Musculoskeletal: tendons, muscles, nerves, ligaments, soft tissue masses, and bone surfaces.
• Cardiovascular system: to assess patency and possible obstruction of arteries (arterial sonography), diagnose
DVT (thrombosonography) and determine the extent and severity of venous insufficiency (venosonography).


Other types of uses include: 

• Interventional: biopsy, emptying fluids, intrauterine transfusion (hemolytic disease of the newborn)

• Contrast-enhanced ultrasound 

A general-purpose sonographic machine may be used for most imaging purposes, but specialty applications usually
require a specialty transducer. Most ultrasound procedures are done using a transducer on the
surface of the body, but improved diagnostic confidence is often possible if a transducer can be placed inside the 
body. For this purpose, specialty transducers, including endovaginal, endorectal, and transesophageal transducers are 
commonly employed. At the extreme of this, very small transducers can be mounted on small diameter catheters and 
placed into blood vessels to image the walls and disease of those vessels. 

Therapeutic applications 

Therapeutic applications use ultrasound to bring heat or agitation into the body. Much higher energies are therefore
used than in diagnostic ultrasound, and in many cases the ranges of frequencies used are also very different.

• Ultrasound is sometimes used to clean teeth in dental hygiene. 

• Ultrasound sources may be used to generate regional heating and mechanical changes in biological tissue, e.g. in 
occupational therapy, physical therapy and cancer treatment. However the use of ultrasound in the treatment of 
musculoskeletal conditions has fallen out of favor. 

• Focused ultrasound may be used to generate highly localized heating to treat cysts and tumors (benign or
malignant). This is known as Focused Ultrasound Surgery (FUS) or High Intensity Focused Ultrasound (HIFU).
These procedures generally use lower frequencies than medical diagnostic ultrasound (from 250 kHz to
2000 kHz), but significantly higher energies. HIFU treatment is often guided by MRI.

• Focused ultrasound may be used to break up kidney stones by lithotripsy. 

• Ultrasound may be used for cataract treatment by phacoemulsification. 

• Additional physiological effects of low-intensity ultrasound have recently been discovered, e.g. its ability to 
stimulate bone-growth and its potential to disrupt the blood-brain barrier for drug delivery. 

• Procoagulant at 5-12 MHz.



From sound to image 

The creation of an image from sound is done in three steps - producing a sound wave, receiving echoes, and 
interpreting those echoes. 

Producing a sound wave 

A sound wave is typically produced by a piezoelectric transducer 
encased in a housing which can take a number of forms. Strong, short 
electrical pulses from the ultrasound machine make the transducer ring 
at the desired frequency. The frequencies can be anywhere between 2 
and 18 MHz. The sound is focused either by the shape of the 
transducer, a lens in front of the transducer, or a complex set of control 
pulses from the ultrasound scanner machine (Beamforming). This 
focusing produces an arc-shaped sound wave from the face of the 
transducer. The wave travels into the body and comes into focus at a 
desired depth. 

Older technology transducers focus their beam with physical lenses. 
Newer technology transducers use phased array techniques to enable 
the sonographic machine to change the direction and depth of focus. 
Almost all piezoelectric transducers are made of ceramic. 

Materials on the face of the transducer enable the sound to be 
transmitted efficiently into the body (usually seeming to be a rubbery 
coating, a form of impedance matching). In addition, a water-based gel 
is placed between the patient's skin and the probe. 

The sound wave is partially reflected from the layers between different tissues. Specifically, sound is reflected 
anywhere there are density changes in the body: e.g. blood cells in blood plasma, small structures in organs, etc. 
Some of the reflections return to the transducer. 

Medical sonographic instrument 

Receiving the echoes 

The return of the sound wave to the transducer results in the same process that it took to send the sound wave, except 
in reverse. The return sound wave vibrates the transducer, the transducer turns the vibrations into electrical pulses 
that travel to the ultrasonic scanner where they are processed and transformed into a digital image. 

Forming the image 

The sonographic scanner must determine three things from each received echo: 

1. How long it took the echo to be received from when the sound was transmitted. 

2. From this the focal length for the phased array is deduced, enabling a sharp image of that echo at that depth (this 
is not possible while producing a sound wave). 

3. How strong the echo was. Note that the sound wave is not a click but a pulse with a specific carrier
frequency. Moving objects change this frequency on reflection, so it is only a matter of electronics to achieve
simultaneous Doppler sonography.

Once the ultrasonic scanner determines these three things, it can locate which pixel in the image to light up and to 
what intensity and at what hue if frequency is processed (see redshift for a natural mapping to hue). 

Transforming the received signal into a digital image may be explained by using a blank spreadsheet as an analogy. 
First picture a long, flat transducer at the top of the sheet. Send pulses down the 'columns' of the spreadsheet (A, B, 


C, etc.). Listen at each column for any return echoes. When an echo is heard, note how long it took for the echo to
return. The longer the wait, the deeper the row (1, 2, 3, etc.). The strength of the echo determines the brightness
setting for that cell (white for a strong echo, black for a weak echo, and varying shades of grey for everything in
between). When all the echoes are recorded on the sheet, we have a greyscale image.
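The spreadsheet analogy can be made concrete with a short sketch. The echo records, grid size and microseconds-per-row scale below are invented purely for illustration:

```python
# Toy B-mode image formation following the spreadsheet analogy:
# each echo record is (column, round-trip time in microseconds, amplitude 0..1).
# Longer delays land in deeper rows; echo strength sets cell brightness.

ROWS, COLS = 8, 4
US_PER_ROW = 10.0  # invented scale: 10 microseconds of delay per row

echoes = [
    (0, 25.0, 0.9),  # strong, shallow echo in column A
    (1, 55.0, 0.4),  # weaker, deeper echo in column B
    (3, 75.0, 1.0),  # strong echo near the bottom of column D
]

# Start from a blank "spreadsheet" (all cells black).
image = [[0.0] * COLS for _ in range(ROWS)]

for col, t_us, amp in echoes:
    row = int(t_us / US_PER_ROW)  # the longer the wait, the deeper the row
    if row < ROWS:
        image[row][col] = amp     # brightness for that cell

for row in image:
    print(" ".join(f"{cell:.1f}" for cell in row))
```

Printing the grid shows the greyscale image: mostly zeros (black), with bright cells where echoes were recorded.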

Displaying the image 

Images from the sonographic scanner can be displayed, captured, and broadcast through a computer using a frame 
grabber to capture and digitize the analog video signal. The captured signal can then be post-processed on the 
computer itself. 

For computational details see also: confocal laser scanning microscopy and radar.

Sound in the body 

Ultrasonography (sonography) uses a probe containing one or more acoustic transducers to send pulses of sound
into a material. Whenever a sound wave encounters a material with a different density (acoustical impedance), part
of the sound wave is reflected back to the probe and is detected as an echo. The time it takes for the echo to travel
back to the probe is measured and used to calculate the depth of the tissue interface causing the echo. The greater
the difference between acoustic impedances, the larger the echo is. If the pulse hits gases or solids, the density
difference is so great that most of the acoustic energy is reflected and it becomes impossible to see deeper.

Linear array transducer

The frequencies used for medical imaging are generally in the range of 1 to 18 MHz. Higher frequencies have a 
correspondingly smaller wavelength, and can be used to make sonograms with smaller details. However, the 
attenuation of the sound wave is increased at higher frequencies, so in order to have better penetration of deeper 
tissues, a lower frequency (3-5 MHz) is used. 
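The resolution side of this trade-off follows from the relation wavelength = c / f. A quick sketch, using the 1540 m/s soft-tissue sound speed that sonographic instruments conventionally assume:

```python
C_TISSUE = 1540.0  # m/s, conventional average speed of sound in soft tissue

def wavelength_mm(freq_mhz):
    """Acoustic wavelength in tissue, in millimetres, for a frequency in MHz."""
    return C_TISSUE / (freq_mhz * 1e6) * 1e3

# Higher frequency -> shorter wavelength -> finer detail, but more attenuation.
for f in (2, 5, 18):
    print(f"{f:>2} MHz -> {wavelength_mm(f):.3f} mm")
```

At 2 MHz the wavelength is 0.77 mm, while at 18 MHz it shrinks to under 0.1 mm, which is why superficial structures are imaged at the higher frequencies.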

Seeing deep into the body with sonography is very difficult. Some acoustic energy is lost every time an echo is
formed, but most of it (approximately 0.3 dB per cm of depth per MHz) is lost from acoustic absorption.

The speed of sound varies as it travels through different materials, and is dependent on the acoustical impedance
of the material. However, the sonographic instrument assumes that the acoustic velocity is constant at 1540 m/s. An 
effect of this assumption is that in a real body with non-uniform tissues, the beam becomes somewhat de-focused 
and image resolution is reduced. 

To generate a 2D image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or
swinging, or a 1D phased array transducer may be used to sweep the beam electronically. The received data is
processed and used to construct the image. The image is then a 2D representation of the slice into the body.

3D images can be generated by acquiring a series of adjacent 2D images. Commonly a specialised probe that 
mechanically scans a conventional 2D-image transducer is used. However, since the mechanical scanning is slow, it 
is difficult to make 3D images of moving tissues. Recently, 2D phased array transducers that can sweep the beam in 
3D have been developed. These can image faster and can even be used to make live 3D images of a beating heart. 

Doppler ultrasonography is used to study blood flow and muscle motion. The different detected speeds are 
represented in color for ease of interpretation, for example leaky heart valves: the leak shows up as a flash of unique 
color. Colors may alternatively be used to represent the amplitudes of the received echoes. 




Modes of sonography 

Several different modes of ultrasound are used in medical imaging. [4] These are:

• A-mode: A-mode is the simplest type of ultrasound. A single transducer scans a line through the body with the 
echoes plotted on screen as a function of depth. Therapeutic ultrasound aimed at a specific tumor or calculus is 
also A-mode, to allow for pinpoint accurate focus of the destructive wave energy. 

• B-mode: In B-mode ultrasound, a linear array of transducers simultaneously scans a plane through the body that 
can be viewed as a two-dimensional image on screen. 

• M-mode: M stands for motion. In M-mode, a rapid sequence of B-mode scans whose images follow each other in
sequence on screen enables doctors to see and measure the range of motion, as the organ boundaries that produce
reflections move relative to the probe.

• Doppler mode: This mode makes use of the Doppler effect in measuring and visualizing blood flow 

• Color doppler: Velocity information is presented as a color coded overlay on top of a B-mode image 

• Continuous Doppler: Doppler information is sampled along a line through the body, and all velocities
detected at each time point are presented (on a timeline)

• Pulsed wave (PW) doppler: Doppler information is sampled from only a small sample volume (defined in 2D 
image), and presented on a timeline 

• Duplex: a common name for the simultaneous presentation of 2D and (usually) PW Doppler information. (On
modern ultrasound machines, color Doppler is almost always also used, hence the alternative name Triplex.)
Midwives generally use this type of system.

Doppler sonography 

Sonography can be enhanced with Doppler measurements, which 
employ the Doppler effect to assess whether structures (usually blood)
are moving towards or away from the probe, and their relative velocity.
By calculating the frequency shift of a particular sample volume, for 
example flow in an artery or a jet of blood flow over a heart valve, its 
speed and direction can be determined and visualised. This is 
particularly useful in cardiovascular studies (sonography of the 
vascular system and heart) and essential in many areas such as 
determining reverse blood flow in the liver vasculature in portal 
hypertension. The Doppler information is displayed graphically using 
spectral Doppler, or as an image using color Doppler (directional 
Doppler) or power Doppler (non directional Doppler). This Doppler 
shift falls in the audible range and is often presented audibly using 
stereo speakers: this produces a very distinctive, although synthetic, 
pulsating sound. 
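The frequency-shift calculation behind this can be sketched with the standard Doppler equation f_d = 2·f_0·v·cos(θ)/c, again assuming a 1540 m/s tissue sound speed; the carrier, velocity and angle below are illustrative numbers, not values from the text:

```python
import math

C = 1540.0  # m/s, assumed speed of sound in tissue

def doppler_shift_hz(f0_hz, velocity_m_s, angle_deg):
    """Doppler shift for blood moving at velocity_m_s, insonated at
    angle_deg between the beam and the flow. Positive toward the probe."""
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / C

def velocity_m_s(f0_hz, shift_hz, angle_deg):
    """Invert the Doppler equation to recover flow speed from a shift."""
    return shift_hz * C / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

# 1 m/s flow, 5 MHz carrier, 60 degree insonation angle:
shift = doppler_shift_hz(5e6, 1.0, 60.0)
print(f"{shift:.0f} Hz")  # ~3247 Hz: within the audible range, which is
                          # why the shift can be played over speakers
print(round(velocity_m_s(5e6, shift, 60.0), 6))  # -> 1.0
```

Note that the shift of a few kilohertz, not the megahertz carrier itself, is what falls in the audible range mentioned above.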

Most modern sonographic machines use pulsed Doppler to measure 
velocity. Pulsed wave machines transmit and receive series of pulses. 
The frequency shift of each pulse is ignored, however the relative 
phase changes of the pulses are used to obtain the frequency shift 
(since frequency is the rate of change of phase). The major advantages 
of pulsed Doppler over continuous wave are that distance information is
obtained (the time between the transmitted and received pulses can be
converted into a distance with knowledge of the speed of sound) and that
gain correction can be applied.

Spectral Doppler of a common carotid artery

Colour Doppler of a common carotid artery

The disadvantage of pulsed Doppler is that
the measurements can suffer from aliasing. The terminology "Doppler 
ultrasound" or "Doppler sonography", has been accepted to apply to 
both pulsed and continuous Doppler systems despite the different 
mechanisms by which the velocity is measured. 
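The phase-based measurement described above can be illustrated with a toy autocorrelation estimator in the style of the Kasai algorithm commonly used in color Doppler. This is a simplified sketch, not a production algorithm; the pulse repetition frequency, carrier and flow velocity are invented:

```python
import cmath
import math

C = 1540.0    # m/s, assumed tissue sound speed
F0 = 5e6      # Hz, carrier frequency
PRF = 4000.0  # Hz, pulse repetition frequency

def velocity_from_pulses(samples):
    """Estimate flow velocity from complex (demodulated) samples taken at
    the same depth on successive pulses. The mean pulse-to-pulse phase
    change gives the Doppler frequency: f_d = PRF * dphi / (2*pi)."""
    # Average the pulse-to-pulse phase change via conjugate products.
    acc = sum(b * a.conjugate() for a, b in zip(samples, samples[1:]))
    dphi = cmath.phase(acc)             # mean phase change per pulse
    f_d = PRF * dphi / (2.0 * math.pi)  # Doppler frequency
    return f_d * C / (2.0 * F0)         # invert f_d = 2*F0*v/C

# Synthesize echoes from blood moving at 0.30 m/s toward the probe:
true_v = 0.30
f_d = 2.0 * F0 * true_v / C
pulses = [cmath.exp(2j * math.pi * f_d * n / PRF) for n in range(16)]
print(round(velocity_from_pulses(pulses), 3))  # -> 0.3
```

Because the phase is only known modulo 2π, velocities producing a per-pulse phase change beyond ±π wrap around, which is exactly the aliasing limitation noted above.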

It should be noted here that there are no standards for the display of 
color Doppler. Some laboratories insist on showing arteries as red and 
veins as blue, as medical illustrators usually show them, even though, 

as a result, a tortuous vessel may have portions with flow toward and away relative to the transducer. This can result 
in the illogical appearance of blood flow that appears to be in both directions in the same vessel. Other laboratories 
use red to indicate flow toward the transducer and blue away from the transducer which is the reverse of 150 years of 
astronomical literature on the Doppler effect. Still other laboratories prefer to display the sonographic Doppler color 
map more in accord with the prior published physics with the red shift representing longer waves of echoes 
(scattered) from blood flowing away from the transducer; and with blue representing the shorter waves of echoes 
reflecting from blood flowing toward the transducer. Because of this confusion and lack of standards in the various 
laboratories, the sonographer must understand the underlying acoustic physics of color Doppler and the physiology 
of normal and abnormal blood flow in the human body. [8] [9] [10] [11] 

Contrast media 

The use of microbubble contrast media in medical sonography to improve ultrasound signal backscatter is known as 
contrast-enhanced ultrasound. This technique is currently used in echocardiography, and may have future 
applications in molecular imaging and drug delivery. 

Compression ultrasonography 

Compression ultrasonography is a technique used for diagnosing deep vein thrombosis and combines 


ultrasonography of the deep veins with venous compression. The technique can be used on deep veins of the 
upper and lower extremities, with some laboratories limiting the examination to the common femoral vein and the 
popliteal vein, whereas other laboratories examine the deep veins from the inguinal region to the calf, including the 
calf veins. 


Compression ultrasonography in B-mode has both high sensitivity and specificity for detecting proximal deep vein 
thrombosis in symptomatic patients: sensitivity ranges from 90% to 100% for the diagnosis of symptomatic deep 
vein thrombosis, and specificity ranges from 95% to 100%. 
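Sensitivity and specificity alone do not give the probability that a positive scan actually indicates thrombosis; that depends on prevalence. A minimal sketch of the Bayes calculation, using midpoint values from the ranges above and an assumed, purely illustrative 20% DVT prevalence among symptomatic patients:

```python
# Positive predictive value from sensitivity, specificity and prevalence.
def ppv(sens, spec, prevalence):
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Midpoints of the quoted ranges (sens 95%, spec 97.5%), assumed 20% prevalence:
p = ppv(0.95, 0.975, 0.20)  # about 0.90
```

At lower prevalence the same test yields a lower predictive value, which is why these figures are quoted specifically for symptomatic patients.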



As with all imaging modalities, ultrasonography has its list of positive and negative attributes. 

Strengths 


• It images muscle, soft tissue, and bone surfaces very well and is particularly useful for delineating the interfaces 
between solid and fluid-filled spaces. 

• It renders "live" images, where the operator can dynamically select the most useful section for diagnosing and 
documenting changes, often enabling rapid diagnoses. Live images also allow for ultrasound-guided biopsies or 
injections, which can be cumbersome with other imaging modalities. 

• It shows the structure of organs. 

• It has no known long-term side effects and rarely causes any discomfort to the patient. 

• Equipment is widely available and comparatively flexible. 

• Small, easily carried scanners are available; examinations can be performed at the bedside. 

• Relatively inexpensive compared to other modes of investigation, such as computed X-ray tomography, DEXA or 
magnetic resonance imaging. 

• Spatial resolution is better in high frequency ultrasound transducers than it is in most other imaging modalities. 

• Through the use of an ultrasound research interface, an ultrasound device can offer a relatively inexpensive, 
real-time, and flexible method for capturing the data required for special research purposes, such as tissue 
characterization and the development of new image-processing techniques. 


Weaknesses 

• Sonographic devices have trouble penetrating bone. For example, sonography of the adult brain is very limited, 
though improvements are being made in transcranial ultrasonography. 

• Sonography performs very poorly when there is gas between the transducer and the organ of interest, due to the 
extreme differences in acoustic impedance. For example, overlying gas in the gastrointestinal tract often makes 
ultrasound scanning of the pancreas difficult, and lung imaging is not possible (apart from demarcating pleural 
effusions). 

• Even in the absence of bone or air, the depth penetration of ultrasound may be limited depending on the frequency 
of imaging. Consequently, there might be difficulties imaging structures deep in the body, especially in obese 
patients. 

• Body habitus has a large influence on image quality. Image quality and diagnostic accuracy are limited in obese 
patients: overlying subcutaneous fat attenuates the sound beam, and a lower-frequency transducer (with lower 
resolution) is required. 

• The method is operator-dependent. A high level of skill and experience is needed to acquire good-quality images 
and make accurate diagnoses. 

• There is no scout image as there is with CT and MRI. Once an image has been acquired there is no exact way to 
tell which part of the body was imaged. 

Risks and side-effects 

Ultrasonography is generally considered a "safe" imaging modality. However, slight detrimental effects have been 
occasionally observed (see below). Diagnostic ultrasound studies of the foetus are generally considered to be safe 
during pregnancy. This diagnostic procedure should be performed only when there is a valid medical indication, and 
the lowest possible ultrasonic exposure setting should be used to gain the necessary diagnostic information, under 
the "as low as reasonably achievable" (ALARA) principle. 



The World Health Organization's Technical Report Series 875 (1998) supports the view that ultrasound is harmless: 
"Diagnostic ultrasound is recognized as a safe, effective, and highly flexible imaging modality capable of providing 
clinically relevant information about most parts of the body in a rapid and cost-effective fashion". Although there is 
no evidence that ultrasound could be harmful to the foetus, the US Food and Drug Administration views the 
promotion, selling, or leasing of ultrasound equipment for making "keepsake foetal videos" as an unapproved use of 
a medical device. 

Studies on the safety of ultrasound 

• A study at the Yale School of Medicine found a correlation between prolonged and frequent use of ultrasound and 
abnormal neuronal migration in mice. 
• A meta-analysis of several ultrasonography studies found no statistically significant harmful effects from 
ultrasonography, but noted a lack of data on long-term substantive outcomes such as neurodevelopment. 


Regulation 

Diagnostic and therapeutic ultrasound equipment is regulated in the USA by the FDA, and worldwide by other 
national regulatory agencies. The FDA limits acoustic output using several metrics; generally, other regulatory 
agencies around the world accept the FDA-established guidelines. 

Currently, New Mexico is the only state in the USA which regulates diagnostic medical sonographers. Certification 
examinations for sonographers are available in the US from three organizations: the American Registry for 
Diagnostic Medical Sonography, Cardiovascular Credentialing International, and the American Registry of 
Radiologic Technologists. 

The primary regulated metrics are MI (mechanical index), a metric associated with the cavitation bio-effect, and TI 
(thermal index), a metric associated with the tissue-heating bio-effect. The FDA requires that the machine not 
exceed limits that they have established. This requires self-regulation on the part of the manufacturer in terms of the 
calibration of the machine. The established limits are reasonably conservative so as to maintain diagnostic ultrasound 
as a safe imaging modality. 

In India, lack of social security and consequent preference for a male child has popularized the use of ultrasound 
technology to identify and abort female foetuses. India's Pre-natal Diagnostic Techniques Act makes the use of 
ultrasound for sex selection illegal, but unscrupulous doctors and would-be parents continue to 
discriminate against the girl child. 

Career Information 


According to the Society of Diagnostic Medical Sonography , a diagnostic medical sonographer in the United 
States of America earns an average of $66,768 (2008). Sonographers work in a variety of settings including 

hospitals, clinics, physician offices, and mobile labs. Some even use their skills and knowledge in veterinary offices. 

Information about a career in diagnostic medical sonography is available from the Society of Diagnostic Medical 
Sonography. The US Department of Labor also provides information about the field in its Occupational Outlook 
Handbook [24]. 


History 

United States 

Ultrasonic energy was first applied to the human body for medical purposes by Dr. George Ludwig at the Naval 
Medical Research Institute, Bethesda, Maryland, in the late 1940s. English-born and educated John Wild 
(1914–2009) first used ultrasound to assess the thickness of bowel tissue as early as 1949; for his early work he has 
been described as the "father of medical ultrasound". 

In 1962, after about two years of work, Joseph Holmes, William Wright, and Ralph Meyerdirk developed the first 
compound contact B-mode scanner. Their work had been supported by U.S. Public Health Services and the 
University of Colorado. Wright and Meyerdirk left the University to form Physionic Engineering Inc., which 
launched the first commercial hand-held articulated-arm compound contact B-mode scanner in 1963. This was the 
start of the most popular design in the history of ultrasound scanners. 

The first demonstration of color Doppler was by Geoff Stevenson, who was involved in the early developments and 
medical use of Doppler shifted ultrasonic energy. 


Sweden 

Medical ultrasonography was used in 1953 at Lund University by cardiologist Inge Edler and Carl Hellmuth Hertz, 
the son of Gustav Ludwig Hertz, who was a graduate student at the department of nuclear physics. 

Edler had asked Hertz if it was possible to use radar to look into the body, but Hertz said this was impossible. 
However, he said, it might be possible to use ultrasonography. Hertz was familiar with using ultrasonic 
reflectoscopes for nondestructive materials testing, and together they developed the idea of using this method in 
medicine. 
The first successful measurement of heart activity was made on October 29, 1953 using a device borrowed from the 
ship construction company Kockums in Malmö. On December 16 the same year, the method was used to generate an 
echo-encephalogram (ultrasonic probe of the brain). Edler and Hertz published their findings in 1954. 


Scotland 

Parallel developments in Glasgow, Scotland by Professor Ian Donald and colleagues at the Glasgow Royal Maternity 
Hospital (GRMH) led to the first diagnostic applications of the technique. Donald was an obstetrician with a 
self-confessed "childish interest in machines, electronic and otherwise", who, having treated the wife of one of the 
company's directors, was invited to visit the Research Department of boilermakers Babcock & Wilcox at Renfrew, 
where he used their industrial ultrasound equipment to conduct experiments on various morbid anatomical 
specimens and assess their ultrasonic characteristics. Together with the medical physicist Tom Brown and fellow 

obstetrician Dr John MacVicar, Donald refined the equipment to enable differentiation of pathology in live volunteer 

patients. These findings were reported in The Lancet on 7 June 1958 as "Investigation of Abdominal Masses by 
Pulsed Ultrasound", possibly one of the most important papers ever published in the field of diagnostic medical 
imaging. 
At GRMH, Professor Donald and Dr James Willocks then refined their techniques to obstetric applications including 
foetal head measurement to assess the size and growth of the foetus. With the opening of the new Queen Mother's 
Hospital in Yorkhill in 1964, it became possible to improve these methods even further. Dr Stuart Campbell's 
pioneering work on foetal cephalometry led to it acquiring long-term status as the definitive method of study of 
foetal growth. As the technical quality of the scans was further developed, it soon became possible to study 
pregnancy from start to finish and diagnose its many complications such as multiple pregnancy, foetal abnormality 
and placenta praevia. Diagnostic ultrasound has since been imported into practically every other area of medicine. 


See also 

• Emergency ultrasound 

• 3D ultrasound 

• Duplex ultrasonography 

• Doppler fetal monitor 

• European Master in Molecular Imaging 


[1] "Foetal Biometry: Vertical Calvarial Diameter and Calvarial Volume". July 2000 (http://jdm.sagepub.com/cgi/content/abstract/1/5/205). Retrieved 2008-09-27. 

[2] ""3D BPD Correction". July 2000" ( . Retrieved 2008-09-27. 
[3] Sonography of the female pelvic floor ( 

sonography_female_pelvic_floor_clinical_indications_and_techniques.html) Clinical indications and techniques 
[4] A Review of Therapeutic Ultrasound: Effectiveness Studies, Valma J Robertson, Kerry G Baker, Physical Therapy, Volume 81, Number 7, July 2001 
[5] A Review of Therapeutic Ultrasound: Biophysical Effects, Kerry G Baker, et al., Physical Therapy, Volume 81, Number 7, July 2001 
[6] Capture and Store Gynecological Ultrasounds 
[7] The Gale Encyclopedia of Medicine, 2nd Edition Volume 1 A-B. Page no.4 
[8] "Wikipedia: "Red Shift"" ( . Retrieved January 25, 2008. 
[9] "Ellis, George FR, Williams, Ruth M.; "Flat and Curved Space-Times" 2nd Edition; Oxford University Press, 2000"" ( 

com/gp/reader/0198506562/ref=sib_dp_ptu#reader-link). . Retrieved January 25, 2008. 
[10] "DuBose TJ, Baker AL; "Confusion and Direction in Diagnostic Doppler Sonography "" ( 

25/3/173/ref=sib_dp_ptu#reader-link). . Retrieved January 25, 2009. 

[11] "Doppler Ultrasound History" (http://www.obgyn.net/ultrasound/ultrasound.asp?page=feature/doppler_history/history_ultrasound). Retrieved January 25, 2008. 

[12] The Diagnostic Approach to Deep Venous Thrombosis: Diagnostic Tests for Deep Vein Thrombosis. Semin Respir Crit Care Med. 2000;21(6). Thieme Medical Publishers. 
[13] Merritt, CR (1 November 1989). "Ultrasound safety: what are the issues?" (http://radiology.rsnajnls.Org/cgi/reprint/173/2/304). 

Radiology 173 (2): 304-306. PMID 2678243. . Retrieved 2008-01-22. 
[15] Ang Jr., ES; Gluncic V, Duque A et al. (2006). "Prenatal exposure to ultrasound waves impacts neuronal migration in mice" (http://www. Proc Natl Acad Sci USA 103 (34): 12903-10. doi: 10. 1073/pnas.0605294103. PMID 16901978. 

PMC 1538990. . Retrieved 2008-01-22. 
[16] Bricker L, Garcia J, Henderson J, et al. (2000). "Ultrasound screening in pregnancy: a systematic review of the clinical effectiveness, 

cost-effectiveness and women's views" ( Health technology assessment (Winchester, 

England) 4 (16): i-vi, 1-193. PMID 11070816. . 
[20] "Safety of diagnostic ultrasound in fetal scanning" ( 

chapter_02.htm). . Retrieved 2010-05-02. 
[21] "PNDT ACT, 1994" ( PNDT ACT (PRINCIPAL ACT)1994.htm). . Retrieved 2010-05-02. 
[25] "History of the AIUM" (http://web.archive.Org/web/20051103064941/ 

Archived from the original ( on November 3, 2005. . Retrieved November 15, 2005. 
[26] "The History of Ultrasound: A collection of recollections, articles, interviews and images" (http://www. asp?page=/us/ 

news_articles/ultrasound_history/asp-history-toc). . Retrieved 2006-05-11. 
[27] British Medical Journal, 2009, 339:b4428 
[28] Woo, Joseph (2002). "A short History of the development of Ultrasound in Obstetrics and Gynecology" ( 

historyl.html). . Retrieved 2007-08-26. 
[29] "Doppler Ultrasound History" ( . Retrieved 2006-05-11. 
[30] Edler I, Hertz CH. The use of ultrasonic reflectoscope for the continuous recording of movements of heart walls. Kungl Fysiogr Sällsk i Lund Förhandl. 1954;24:5. Reproduced in Clin Physiol Funct Imaging 2004;24:118-36. PMID 15165281. 
[31] Donald I, Mac Vicar J, Brown TG. Investigation of abdominal masses by pulsed ultrasound. Lancet 1958; 1(7032): 1188-95. PMID 13550965 


External links 

• American Institute of Ultrasound in Medicine ( Professional Association 

• About the discovery of medical ultrasonography ( 

• History of medical sonography (ultrasound) ( 

• Procedures in Ultrasound (Sonography) ( cfm?modal=US) for 
patients, from 

• Careers in the vascular ultrasound field ( 

• Sonography of the female pelvic floor: clinical indications and techniques ( 
practical/sonography _female_pelvic_floor_clinical_indications_and_techniques.html) Illustrate the clinical 
utility of this non-invasive diagnostic technique. 

• How to Become an Ultrasound Technician - a wiki article on becoming an ultrasound technician. 

• A Pilot Study of Comprehensive Ultrasound Education at the Wayne State University School of Medicine: http:// 


Advanced Experimental Techniques and Methods 

Optical Tomography and Imaging 

Optical imaging is an imaging technique that uses light. Optics usually describes the behavior of visible, ultraviolet, 
and infrared light in imaging. Because light is an electromagnetic wave, similar phenomena occur in X-rays, 
microwaves, and radio waves. Chemical imaging or molecular imaging involves inferring the structure, texture, 
anatomic and chemical properties of a material (e.g. a crystal or cell tissue) from the deflection of light emitted 
from a source (e.g. a laser or infrared source). Optical imaging systems may be divided into diffusive and ballistic 
imaging systems. 

Diffusive optical imaging in neuroscience 

Diffusive optical imaging (also known as Near Infrared Optical Tomography or NIROT) is a technique that gives 
neuroscientists the ability to simultaneously obtain information about the source of neural activity as well as its time 
course. In other words, it allows them to "see" neural activity and study the functioning of the brain. 

In this method, a near-infrared laser is positioned on the scalp. Detectors composed of optical fiber bundles are 
located a few centimeters away from the light source. These detectors sense how the path of light is altered, either 
through absorption or scattering, as it traverses brain tissue. 

This method can provide two types of information. First, it can be used to measure the absorption of light, which is 
related to concentration of chemicals in the brain. Second, it can measure the scattering of light, which is related to 
physiological characteristics such as the swelling of glia and neurons that are associated with neuronal firing. 

Typical applications include rapid 2D optical topographic imaging of the event-related optical signal (EROS) or 
Near infrared spectroscopy (NIRS) signal following brain activity and tomographic reconstruction of an entire 3D 
volume of tissue to diagnose breast cancer or neonatal brain haemorrhage. The spatial resolution of DOT techniques 
is several millimeters, comparable to the lower end of functional magnetic resonance imaging (fMRI). The temporal 
resolution of EROS is very good, comparable to electroencephalography and magnetoencephalography 
(~milliseconds), while that of NIRS, which measures hemodynamic changes rather than neuronal activity, is 
comparable to fMRI (~seconds). DOT instruments are relatively low cost (~$150,000), portable and immune to 
electrical interference. The signal-to-noise ratio of NIRS is quite good, enabling detection of responses to single 
events in many cases. EROS signals are much weaker, typically requiring averaging of many responses. 

Important chemicals that this method can detect include hemoglobin and cytochromes. 
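The absorption measurement described above is commonly quantified with the modified Beer-Lambert law: optical-density changes at two wavelengths are inverted for oxy- and deoxyhemoglobin concentration changes. A minimal sketch follows; the extinction coefficients, source-detector separation, and differential pathlength factor are approximate, illustrative values, not parameters from the text:

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD = eps * delta_C * d * DPF.
# Approximate extinction coefficients [1/(mM*cm)] for HbO2 and HbR at
# 760 nm and 850 nm (placeholder literature-style values).
eps = np.array([[1.49, 3.84],    # 760 nm: [HbO2, HbR]
                [2.53, 1.80]])   # 850 nm: [HbO2, HbR]

d = 3.0    # source-detector separation, cm (assumption)
dpf = 6.0  # differential pathlength factor (tissue-dependent assumption)

def hb_changes(delta_od):
    """Solve for [delta_HbO2, delta_HbR] in mM from the optical-density
    changes measured at the two wavelengths."""
    return np.linalg.solve(eps * d * dpf, np.asarray(delta_od, float))

# Example: synthetic OD changes at 760 nm and 850 nm
d_hbo2, d_hbr = hb_changes([0.01, 0.02])
```

Using two wavelengths makes the 2x2 system exactly determined; real instruments often use more wavelengths and a least-squares fit.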


Ballistic optical imaging 

Ballistic optical imaging systems ignore the diffused photons and rely only on the ballistic photons to create 
high-resolution (near diffraction limited) images through scattering media. 

See also 

• Photon diffusion 

• Ballistic imaging 

• Photon diffusion equation 


[1] Weissleder, R., Mahmood, U., Molecular Imaging. Radiology 2001; 219:316—333. (http://radiology.rsnajnls.Org/cgi/reprint/219/2/316) I 

Download PDF 
[2] Gambhir, S.S., Massoud, T.F., Molecular imaging in living subjects: seeing fundamental biological processes in a new light. Genes & 

Development. (2003) 17:545—580. Download PDF ( GliomaZinn.pdf) 
[3] Olive D.M., Kovar, J.L., Simpson, M.A., Schutz-Geschwender, A., A systematic approach to the development of fluorescent contrast agents 

for optical imaging of mouse cancer models, Analytical Biochemistry 2007;(367), #1, 1—12. Download PDF ( 

[4] A. Gibson, J. Hebden, and S. Arridge. "Recent advances in diffuse optical imaging" ( 

Gibsonetal05Review.pdf). Phys. Med. Biol. 50, R1-R43 (2005).. . 
[5] S. Farsiu, J. Christofferson, B. Eriksson, P. Milanfar, B. Friedlander, A. Shakouri, R. Nowak. "Statistical Detection and Imaging of Objects 

Hidden in Turbid Media Using Ballistic Photons" ( 

Applied Optics, vol. 46, no. 23, pp. 5805-5822, Aug. 2007.. . 

External links 

• Understanding Near-Infrared Imaging ( 
pearl_sensitivity.jsp/) — Resource to better understand the benefits of Near-Infrared imaging. 

• Diffuse Optics Lab at University of Pennsylvania, Philadelphia ( 

• DOI at Massachusetts General Hospital, Boston ( 

• Biomedical Imaging Group at Dartmouth ( 

• DOS/I Lab at the Beckman Laser Institute, University of California, Irvine ( 

• A review article in the field by A.P. Gibson et al. (http://www.iop.Org/EJ/abstract/0031-9155/50/4/R01) 

• An article on optical breast imaging ( 

• Illinois ECE 460 Principles of Optical Imaging ( Course lecture notes 



Breast cancer screening 

Breast cancer screening refers to testing otherwise-healthy women for breast cancer in an attempt to achieve an 
earlier diagnosis. The assumption is that early detection will improve outcomes. A number of screening tests have 
been employed, including clinical and self breast exams, mammography, genetic screening, ultrasound, and 
magnetic resonance imaging. 

A clinical or self breast exam involves feeling the breast for lumps or other abnormalities. Evidence, however, does 
not support its use. Mammographic screening for breast cancer is also controversial. The Cochrane collaboration 
in 2009 concluded that it is unclear whether screening does more good than harm. Many national organizations 
nevertheless still recommend it. If mammography is undertaken, it should only be done every two years in women 
between the ages of 50 and 74. Several tools are available to help target breast cancer screening to older women 
with longer life expectancies. 

Abnormal findings on screening are further investigated by surgically removing a piece of the suspicious lumps 
(biopsy) to examine them under the microscope. Ultrasound may be used to guide the biopsy needle during the 
procedure. Magnetic resonance imaging is used to guide treatment, but is not an established screening method for 
healthy women. 

Women with a family history of breast and ovarian cancer have a higher risk of mutations of the BRCA1 and 
BRCA2 genes. These mutations result in a higher risk of breast cancer. Testing for these genes is expensive and not 
done routinely. However those with this mutation should be screened more aggressively: starting at an earlier age, 
with greater frequency, and possibly with magnetic resonance imaging. 

Breast exam 

Breast examinations (either clinical breast exams (CBE) by a health 
care provider or self exams) were once widely recommended. They are, 
however, not supported by evidence and may contribute to harm. 
Their use in women without symptoms and at low risk is thus not recommended. 

A 2003 Cochrane review found no benefit in terms of mortality from 
screening by breast self-examination or by clinical exam, but rather 
possible harm in the form of an increased number of benign lesions 
identified and an increased number of biopsies performed. It concluded 
that "screening by breast self-examination or physical examination 
cannot be recommended." 


[Figure: A pictorial example of breast self-examination in six steps. Steps 1-3 involve inspection of the breast with 
the arms hanging next to the body, behind the head and at the side. Step 4 is palpation of the breast. Step 5 is 
palpation of the nipple. Step 6 is palpation of the breast while lying down.] 

Mammography 



Mammography is a common screening method, since it is relatively 
fast and widely available in developed countries. When detected by 
mammography, breast cancers are usually smaller (in an earlier stage) 
than those detected by patients or doctors as a breast lump, and 
presumably treatment in an earlier stage will improve outcome. This 
assertion however has been challenged by recent reviews which have 
found the significance of these benefits to be questionable. 

A 2009 Cochrane review estimated that mammography in women 
between 50 and 75 years old results in a relative risk reduction of death 
from breast cancer of 15%, or an absolute risk reduction of 0.05%. 
Those who have mammograms, however, end up with increased 
surgeries, chemotherapy, radiotherapy and other potentially unnecessary procedures resulting from the 
over-detection of harmless lumps. Consequently, the value of routine mammography in women at low or average 
risk is controversial. With unnecessary treatment of ten women for every one woman whose life was prolonged, the 
authors concluded that routine mammography may do more harm than good. 

[Figure: Normal (left) versus cancerous (right) mammography image.] 


A 2009 U.S. Preventive Services Task Force analysis came to similar conclusions: for women in their 40s, 2000 
would need to be screened for 10 years, resulting in 1000 women with false positive results and 250 unnecessary 
biopsies, to prevent one breast cancer death; for women in their 50s, 1339 would need to be screened for 10 years to 
prevent a single breast cancer related death; and for women in their 60s, 377 women would need to be screened for 10 years. 
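The Cochrane and Task Force figures above are connected by simple arithmetic: the number needed to screen is the reciprocal of the absolute risk reduction. A minimal check:

```python
# Number needed to screen (NNS) from an absolute risk reduction (ARR).
def number_needed_to_screen(arr):
    return 1.0 / arr

nns = number_needed_to_screen(0.0005)   # ARR of 0.05% -> 2000 women
# Baseline risk implied by RRR = 15% together with ARR = 0.05%:
baseline_risk = 0.0005 / 0.15           # about 0.33%
```

The result of 2000 matches the Task Force's figure for women in their 40s, which is why an ARR expressed as a tiny percentage can still translate into a concrete screening burden.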

An analysis of Norwegian women published in 2010 found a 10% reduction in breast cancer mortality (2.4 deaths per 
100,000 person-years) attributable to screening, but this difference was not statistically significant. 

In places that recommend screening, the age at which screening should begin and its frequency differ around the 
world. In the UK, all women are invited for screening once every three years beginning at age 50. As of 2009, the 
US Preventive Services Task Force recommends that women over the age of 50 receive mammography once every 
two years. 

Women at higher risk may benefit from earlier or more frequent screening. Women with one or more first-degree 
relatives (mother, sister, daughter) with premenopausal breast cancer often begin screening at an earlier age, perhaps 
at an age 10 years younger than the age when the relative was diagnosed with breast cancer. 

Mammography is not generally considered an effective screening technique for women less than 50 years old. A 

systematic review by the American College of Physicians concluded that, for women 40 to 49 years of age, the risks 

of mammography outweighed the benefits, and the US Preventive Services Task Force says that the evidence in 


favor of routine screening of women under the age of 50 is "weak". Part of the difficulty in interpreting 
mammograms in younger women stems from breast density. Radiographically, a dense breast has a preponderance of 
glandular tissue, and younger age or estrogen hormone replacement therapy contribute to mammographic breast 
density. After menopause, the breast glandular tissue gradually is replaced by fatty tissue, making mammographic 
interpretation much more accurate. Some authors speculate that part of the contribution of estrogen Hormone 
replacement therapy to breast cancer mortality arises from the issue of increased mammographic breast density. 



In general, digital mammography and computer-aided mammography have increased the sensitivity of 
mammograms, but this may come at the cost of more numerous false positive results. 

Computer-aided diagnosis (CAD) systems may help radiologists evaluate X-ray images to detect breast cancer at 
an early stage. CAD is especially established in the US and the Netherlands. It is used in addition to the human 
evaluation of the diagnostician. 

Health programs 

In 2005, 67.9% of all U.S. women age 40—64 had a mammogram in the past two years (74.5% of women with 
private health insurance, 56.1% of women with Medicaid insurance, 38.1% of currently uninsured women, and 
32.9% of women uninsured for > 12 months). All U.S. states (except Utah) mandate that private health insurance 
plans and Medicaid provide some coverage for breast cancer screening. Section 4101 of the Balanced Budget Act 
of 1997 required that Medicare (available to those aged 65 or older or who have been on Social Security Disability 
Insurance for over 2 years), effective January 1, 1998, cover and waive the Part B deductible for annual screening 
mammography in women aged 40 or older. 

All organized breast cancer screening programs in Canada offer clinical breast examinations for women aged 40 and 
over and screening mammography every two years for women aged 50–69. In 2003, about 61% of women aged 
50–69 in Canada reported having had a mammogram within the past two years. 

The NHS Breast Screening Programme, the first of its kind in the world, began in 1988 and achieved national 
coverage in the mid-1990s. It provides free breast cancer screening mammography every three years for all women 
in the UK aged 50 and over. As of 31 March 2006, 75.9% of women aged 53–64 resident in England had been 
screened at least once in the previous three years. 

The Australian national breast screening program, BreastScreen Australia, commenced in the early 1990s and 
invites women aged 50–69 to screening every two years. No routine clinical examination is performed, and 
screening is free to the point of diagnosis. 

The Singapore national breast screening program, BreastScreen Singapore, is the only publicly funded national 
breast screening program in Asia, and enrols women aged 50—64 for screening every two years. Like the Australian 
system, no clinical examination is performed routinely. Unlike most national screening systems however, clients 
have to pay half of the cost of the screening mammogram; this is in line with the Singapore health system's core 
principle of co-payment for all health services. 


Some scientific groups, however, have expressed concern about the public's perceptions of the benefits of breast 
screening. 
Data reported in the UK Million Woman Study indicate that if 134 mammograms are performed, 20 women will be 
called back for suspicious findings, and four biopsies will be necessary, to diagnose one cancer. Recall rates are 
higher in the U.S. than in the UK. The contribution of mammography to the early diagnosis of cancer is 
controversial, and for those found with benign lesions, mammography can create a high psychological and financial 
cost. For those diagnosed with cancer, mammography can be the difference between a lumpectomy and metastatic 
disease. 

Screening leads to false positive results and subsequent invasive procedures. 
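The Million Woman Study figures quoted above imply the following screening yield (straightforward division, shown for clarity):

```python
# Yield implied by: 134 mammograms -> 20 recalls -> 4 biopsies -> 1 cancer.
mammograms, recalls, biopsies, cancers = 134, 20, 4, 1

recall_rate = recalls / mammograms   # fraction of screened women recalled
ppv_recall = cancers / recalls       # fraction of recalls that are cancer
ppv_biopsy = cancers / biopsies      # fraction of biopsies that are cancer
```

About 15% of women are recalled, yet only one recall in twenty represents cancer, which quantifies the false-positive burden discussed in this section.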

Nevertheless, surveys have shown that most women participating in mammography screening programs accept the 
risk of false positive recall and the majority do not find this highly distressing. The majority of women recalled will 
undergo additional imaging only, without any further intervention. There is some debate over how harmful such 
noninvasive recall assessment truly is. 


A major effect of routine breast screening is to greatly increase the rate of early breast cancer detection, in particular 
for preinvasive ductal carcinoma in situ (DCIS), which is almost always impalpable and which cannot, for the most 
part, be detected reliably by any other test. While this ability to detect such very early breast malignancies is at the 
heart of claims that screening mammography can improve survival from breast cancer, it is also controversial. This is 
because a large percentage of such cases will almost certainly not progress to kill the patient, and thus 
mammography cannot be genuinely claimed to have been beneficial in such cases; in fact, it would lead to increased 
morbidity and unnecessary surgery for such patients. 

It has thus been claimed that finding and treating many cases of DCIS represents "overdiagnosis" and 
"overtreatment". However, it is not possible to accurately predict which patients with DCIS will have an indolent 
nonfatal course, and which will inevitably progress to invasive cancer and death if left untreated. Consequently, all 
patients with DCIS are treated in much the same way, with at least wide local excision, and sometimes mastectomy 
if the DCIS is very extensive. The cure rate for DCIS if treated appropriately is extremely high. Thus, it can be 
argued that some women with DCIS detected by screening mammography do in fact benefit from screening, even if 
others do not. Any refinement of this therapeutic approach to breast malignancy requires further research and the 
development of methods that accurately predict the future cellular fate and biological behaviour of early cancers. 

It is salient to note that the phenomenon of finding preinvasive malignancy or nonmalignant benign disease is 
commonplace in all forms of cancer screening, including pap smears for cervical cancer, fecal occult blood testing 
for colon cancer, and prostate-specific antigen testing for prostate cancer. All of these tests have the potential to 
detect very early malignancy before it becomes symptomatic, and to potentially lead to long term cure. All of them 
have false positives, and can lead to invasive procedures that may not benefit the patient. 

Given current limitations in knowledge and technology, the only way to avoid "overdiagnosis" and "overtreatment", 
and to eliminate all harms inherent in breast cancer screening is not to screen at all, and thus to eliminate any benefit 
to at least some of the screened population that screening entails. In the case of breast cancer, cessation of screening 
would lead to an increase in the size and stage of breast cancers at diagnosis, since most cancers would only be 
detected when they become symptomatic. 


Medical ultrasonography (Ultrasound) is a diagnostic aid to mammography. 

Breast MRI 

Magnetic resonance imaging (MRI) has been shown to detect cancers not visible on mammograms. The chief 
strength of breast MRI is its very high negative predictive value. A negative MRI can rule out the presence of cancer 
to a high degree of certainty, making it an excellent tool for screening in patients at high genetic risk or 
radiographically dense breasts, and for pre-treatment staging where the extent of disease is difficult to determine on 
mammography and ultrasound. MRI can diagnose benign proliferative change, fibroadenomas, and other common 
benign findings at a glance, often eliminating the need for costly and unnecessary biopsies or surgical procedures. 
The spatial and temporal resolution of breast MRI has increased markedly in recent years, making it possible to 
detect or rule out the presence of small in situ cancers, including ductal carcinoma in situ. 

However, breast MRI has long been regarded to have disadvantages. For example, although it is 27—36% more 


sensitive, it has been claimed to be less specific than mammography. . As a result, MRI studies may have more 
false positives (up to 30%), which may have undesirable financial and psychological costs. It is also a relatively 
expensive procedure, and one which requires the intravenous injection of gadolinium, which has been implicated in a 
rare reaction called nephrogenic systemic fibrosis. Although NSF is extremely uncommon, patients with a history of 
renal disease may not be able to undergo breast MRI. Further, an MRI may not be used for screening patients with a 
pacemaker or breast reconstruction patients with a tissue expander due to the presence of metal. 


Proposed indications for using MRI for screening include: 

• Strong family history of breast cancer 

• Patients with BRCA-1 or BRCA-2 oncogene mutations 

• Evaluation of women with breast implants 

• History of previous lumpectomy or breast biopsy surgeries 

• Axillary metastasis with an unknown primary tumor 

• Very dense or scarred breast tissue 

In addition, breast MRI may be helpful for screening in women who have had breast augmentation procedures 
involving intramammary injections of various foreign substances that may mask the appearances of breast cancer on 
mammography and/or ultrasound. These substances include: 

• Silicone oil 

• Polyacrylamide gel 

Two studies published in 2007 demonstrated the strengths of MRI-based screening: 

• In March 2007, an article published in the New England Journal of Medicine demonstrated that in 3.1% of 
patients with breast cancer, whose contralateral breast was clinically and mammographically tumor-free, MRI 
could detect breast cancer. Sensitivity for detection of breast cancer in this study was 91%, specificity 88%. 

• In August 2007, an article published in The Lancet compared MRI breast cancer screening to conventional 
mammographic screening in 7,319 women. MRI screening was significantly more sensitive (97% in the MRI group vs. 
56% in the mammography group) in recognizing early high-grade ductal carcinoma in situ (DCIS), the most 
important precursor of invasive carcinoma. Despite the high sensitivity, MRI screening had a positive predictive 
value of 52%, which is acceptable for cancer screening tests. The author of a comment published in the same issue 
of The Lancet concluded that "MRI outperforms mammography in tumour detection and diagnosis".[23] 


Based on this evidence, and the lack of effective alternative methods for screening in young women of very high 
genetic risk (either an extremely strong first degree family history or proven BRCA1 or BRCA2 oncogene mutation 
carrier status) for breast cancer, the Australian federal government decided to routinely reimburse annual breast MRI 
scans for such women under the age of 50 from January 2009 onwards. 

BRCA testing 

A clinical practice guideline by the US Preventive Services Task Force: 

• "recommends against routine referral for genetic counseling or routine breast cancer susceptibility gene (BRCA) 
testing for women whose family history is not associated with an increased risk for deleterious mutations in breast 
cancer susceptibility gene 1 (BRCA1) or breast cancer susceptibility gene 2 (BRCA2)" The Task Force gave a 
grade D recommendation. 

• "recommends that women whose family history is associated with an increased risk for deleterious mutations in 
BRCA1 or BRCA2 genes be referred for genetic counseling and evaluation for BRCA testing." The Task Force 
gave a grade B recommendation. 

The Task Force noted that about 2% of women have family histories that indicate increased risk as defined by: 

• For non-Ashkenazi Jewish women, any of the following: 

• "2 first-degree relatives with breast cancer, 1 of whom received the diagnosis at age 50 years or younger" 

• "3 or more first- or second-degree relatives with breast cancer regardless of age at diagnosis" 

• "both breast and ovarian cancer among first- and second-degree relatives" 

• "a first-degree relative with bilateral breast cancer" 

• "a combination of 2 or more first- or second-degree relatives with ovarian cancer regardless of age at 
diagnosis" 



• "a first- or second-degree relative with both breast and ovarian cancer at any age" 

• "a history of breast cancer in a male relative." 

• "For women of Ashkenazi Jewish heritage, an increased-risk family history includes any first-degree relative (or 2 
second-degree relatives on the same side of the family) with breast or ovarian cancer." 


References 

• Gilberto Schwartsmann (2001) "Breast Cancer in South America: Challenges to improve early detection and 
medical management of a public health problem." J Clin Oncol 19: 118-124. 

[1] Kösters JP, Gøtzsche PC (2003). "Regular self-examination or clinical examination for early detection of breast cancer". Cochrane Database 
Syst Rev (2): CD003373. doi:10.1002/14651858.CD003373. PMID 12804462. 

[2] Gøtzsche PC, Nielsen M (2009). "Screening for breast cancer with mammography". Cochrane Database Syst Rev (4): CD001877. 

doi:10.1002/14651858.CD001877.pub3. PMID 19821284. 
[3] "Breast Cancer: Screening" ( United States Preventive Services Task Force. . 
[4] Schonberg M. Breast cancer screening: at what age to stop? ( 

Consultant. 2010;50(May): 196-205. 
[5] Saslow D, Hannan J, Osuch J, et al. (2004). "Clinical breast examination: practical recommendations for optimizing performance and 

reporting". CA Cancer J Clin 54 (6): 327-44. doi:10.3322/canjclin.54.6.327. PMID 15537576. 
[6] Nelson HD, Tyne K, Naik A, Bougatsos C, Chan BK, Humphrey L (November 2009). "Screening for breast cancer: an update for the U.S. 

Preventive Services Task Force". Ann. Intern. Med. 151 (10): 727-37, W237-42. doi:10.1059/0003-4819-151-10-200911170-00009. 

PMID 19920273. 
[7] Kalager M, Zelen M, Langmark F, Adami HO (September 2010). "Effect of screening mammography on breast-cancer mortality in Norway". 

N. Engl. J. Med. 363 (13): 1203-10. doi:10.1056/NEJMoal000727. PMID 20860502. 
[8] US Preventive Services Task Force (November 2009). "Screening for breast cancer: U.S. Preventive Services Task Force recommendation 

statement". Ann. Intern. Med. 151 (10): 716-26, W-236. doi:10.1059/0003-4819-151-10-200911170-00008. PMID 19920272. 
[9] Armstrong K, Moye E, Williams S, Berlin JA, Reynolds EE (2007). "Screening mammography in women 40 to 49 years of age: a systematic 

review for the American College of Physicians". Ann. Intern. Med. 146 (7): 516-26. PMID 17404354. 
[10] Ward E, Halpern M, Schrag N, Cokkinides V, DeSantis C, Bandi P, Siegel R, Stewart A, Jemal A (Jan-February 2008). "Association of 

insurance with cancer care utilization and outcomes" ( CA Cancer J 

Clin 58 (1): 9. doi:10.3322/CA.2007.0011. PMID 18096863. . 

[II] Kaiser Family Foundation (December 31, 2006). "State Mandated Benefits: Cancer Screening for Women, 2006" (http://www. ?ind=488&cat=10&yr=17&typ=5). . 

[12] Canadian Cancer Society (August 10, 2007). "Breast cancer screening in your 40s" ( 

0,3182,3172_573785695_2026817819_langld-en,00.html). . 
[13] Canadian Cancer Society (April 2006). "Canadian Cancer Statistics, 2006" ( 

21/935505792cw_2006stats_en.pdf.pdf) (PDF). . 
[14] NHS Cancer Screening Programmes (2007). "NHS Breast Screening Programme" ( . 
[15] The Information Centre (NHS) (March 23, 2007). "Breast Screening Programme 2005/06" ( 

statistics-and-data-collections/screening/breast-cancer/breast-screening-programme-2005-06-[ns]?). . 
[16] "Women 'misjudge screening benefits'" ( BBC. 15 October 2001. . Retrieved 

[17] Smith-Bindman R, Ballard-Barbash R, Miglioretti DL, Patnick J, Kerlikowske K (2005). "Comparing the performance of mammography 

screening in the USA and the UK". Journal of medical screening 12 (1): 50-4. doi:10.1258/0969141053279130. PMID 15814020. 
[18] Croswell JM, Kramer BS, Kreimer AR, et al. (2009). "Cumulative incidence of false-positive results in repeated, multimodal cancer 

screening" ( Ann Fam Med 7 (3): 212—22. 

doi:10.1370/afm.942. PMID 19433838. PMC 2682972. 
[19] Hrung J, Sonnad S, Schwartz J, Langlotz C (1999). "Accuracy of MR imaging in the work-up of suspicious breast lesions: a diagnostic 

meta-analysis.". AcadRadiol 6 (7): 387-97. doi:10.1016/S1076-6332(99)80189-5. PMID 10410164. 
[20] Morrow M (2004). "Magnetic resonance imaging in breast cancer: one step forward, two steps back?". JAMA 292 (22): 2779—80. 

doi:10.1001/jama.292.22.2779. PMID 15585740. 
[21] Lehman CD, Gatsonis C, Kuhl CK, Hendrick RE, Pisano ED, Hanna L, Peacock S, Smazal SF, Maki DD, Julian TB, DePeri ER, Bluemke 

DA, Schnall MD (2007). "MRI evaluation of the contralateral breast in women with recently diagnosed breast cancer.". N Engl J Med. 356 

(13): 1295-1303. doi:10.1056/NEJMoa065447. PMID 17392300. 
[22] Kuhl CK, Schrading S, Bieling HB, Wardelmann E, Leutner CC, Koenig R, Kuhn W, Schild HH (2007). "MRI for diagnosis of pure ductal 

carcinoma in situ: a prospective observational study". The Lancet 370 (9586): 485-92. doi:10.1016/S0140-6736(07)61232-X. 
[23] Boetes C, Mann RM (2007). "Ductal carcinoma in situ and breast MRI". The Lancet 370 (9586): 459-60. 




[24] U.S. Preventive Services Task Force (2002). "Screening for breast cancer: recommendations and rationale" ( 

content/full/137/5_Part_l/344). Ann. Intern. Med. 137 (5 Part 1): 344-6. PMID 12204019. . 
[25] "Guide to Clinical Preventive Services, Third Edition: Periodic Updates, 2000-2003" ( 

htm). Agency for Healthcare Research and Quality. US Preventive Services Task Force. . Retrieved 2007-10-07. 

External links 

• (, US non-profit information provider 

• Breast cancer (http://www.dmoz.Org//Health/Conditions_and_Diseases/Cancer/Breast//) at the Open 
Directory Project 

• Breast cancer screening page ( from the National 
Cancer Institute 

• Breast Cancer Screening ( from 

Fourier transform spectroscopy 

Fourier transform spectroscopy is a measurement technique whereby spectra are collected based on measurements 
of the coherence of a radiative source, using time-domain or space-domain measurements of the electromagnetic 
radiation or other type of radiation. It can be applied to a variety of types of spectroscopy including optical 
spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), nuclear magnetic resonance (NMR) and magnetic resonance 
spectroscopic imaging (MRSI), mass spectrometry and electron spin resonance spectroscopy. There are several 
methods for measuring the temporal coherence of the light (see: field-autocorrelation), including the continuous 
wave Michelson or Fourier transform spectrometer and the pulsed Fourier transform spectrograph (which is more 
sensitive and has a much shorter sampling time than conventional spectroscopic techniques, but is only applicable in 
a laboratory environment). 

The term Fourier transform spectroscopy reflects the fact that in all these techniques, a Fourier transform is required 
to turn the raw data into the actual spectrum; in many of the cases in optics involving interferometers, the technique 
relies on the Wiener–Khinchin theorem. 

Conceptual introduction 

Measuring an emission spectrum 

One of the most basic tasks in spectroscopy is to characterize the 
spectrum of a light source: How much light is emitted at each different 
wavelength. The most straightforward way to measure a spectrum is to 
pass the light through a monochromator, an instrument that blocks all 
of the light except the light at a certain wavelength (the un-blocked 
wavelength is set by a knob on the monochromator). Then the intensity 
of this remaining (single-wavelength) light is measured. The measured 
intensity directly indicates how much light is emitted at that 
wavelength. By varying the monochromator's wavelength setting, the 
full spectrum can be measured. This simple scheme in fact describes 
how some spectrometers work. 


Figure: an example of a spectrum, the spectrum of light emitted by the blue flame of a butane torch. The horizontal 
axis is the wavelength of light (in nm), and the vertical axis represents how much light is emitted by the torch at 
that wavelength. 



Fourier transform spectroscopy is a less intuitive way to get the same information. Rather than allowing only one 
wavelength at a time to pass through to the detector, this technique lets through a beam containing many different 
wavelengths of light at once, and measures the total beam intensity. Next, the beam is modified to contain a different 
combination of wavelengths, giving a second data point. This process is repeated many times. Afterwards, a 
computer takes all this data and works backwards to infer how much light there is at each wavelength. 

To be more specific, between the light source and the detector, there is a certain configuration of mirrors that allows 
some wavelengths to pass through but blocks others (due to wave interference). The beam is modified for each new 
data point by moving one of the mirrors; this changes the set of wavelengths that can pass through. 

As mentioned, computer processing is required to turn the raw data (light intensity for each mirror position) into the 
desired result (light intensity for each wavelength). The processing required turns out to be a common algorithm 
called the Fourier transform (hence the name, "Fourier transform spectroscopy"). The raw data is sometimes called 
an "interferogram". 
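This "work backwards" step can be sketched in a few lines of Python. The sketch below is illustrative only: the two-line spectrum, the wavenumber grid, and the mirror positions are invented for the demo, and the decoding is written as an explicit discrete cosine transform rather than the FFT a real instrument would use.

```python
# Illustrative sketch of "working backwards" from raw intensities to a
# spectrum. The two-line spectrum, wavenumber grid, and mirror positions
# are invented for the demo; a real instrument would use an FFT.
import math

wavenumbers = [10.0 * i for i in range(1, 101)]        # 10..1000 cm^-1
true_spectrum = [0.0] * len(wavenumbers)
true_spectrum[24] = 1.0    # a line at 250 cm^-1
true_spectrum[59] = 0.5    # a weaker line at 600 cm^-1

# Each mirror position p passes a different mix of wavelengths:
# I(p) = sum over v of I(v) * (1 + cos(2*pi*v*p)).
positions = [5e-5 * n for n in range(400)]             # retardation in cm
interferogram = [sum(s * (1 + math.cos(2 * math.pi * v * p))
                     for v, s in zip(wavenumbers, true_spectrum))
                 for p in positions]

# Decode: correlate the (mean-subtracted) raw data with cos(2*pi*v*p)
# for each candidate wavenumber, a discrete Fourier cosine transform.
mean_i = sum(interferogram) / len(interferogram)
recovered = [2.0 / len(positions) *
             sum((i - mean_i) * math.cos(2 * math.pi * v * p)
                 for i, p in zip(interferogram, positions))
             for v in wavenumbers]

peak = max(range(len(recovered)), key=lambda k: recovered[k])
print(wavenumbers[peak])   # -> 250.0, the strongest line is recovered
```

The weaker 600 cm⁻¹ line is recovered at roughly half the height of the 250 cm⁻¹ line, as expected.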

Measuring an absorption spectrum 

The method of Fourier transform spectroscopy can also be used for 
absorption spectroscopy. The primary example is "FTIR 
Spectroscopy", a common technique in chemistry. 

In general, the goal of absorption spectroscopy is to measure how well 
a sample absorbs or transmits light at each different wavelength. 
Although absorption spectroscopy and emission spectroscopy are 
different in principle, they are closely related in practice; any technique 
for emission spectroscopy can also be used for absorption 
spectroscopy. First, the emission spectrum of a broadband lamp is 
measured (this is called the "background spectrum"). Second, the 
emission spectrum of the same lamp shining through the sample is 
measured (this is called the "sample spectrum"). The sample will 
absorb some of the light, causing the spectra to be different. The ratio 
of the "sample spectrum" to the "background spectrum" is directly related to the sample's absorption spectrum. 
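The ratio described above is simple bookkeeping. The following sketch uses invented intensity values purely to illustrate it:

```python
# Sketch of the background/sample ratio described above. The intensity
# values are invented numbers, not real measurements.
background = {500: 10.0, 550: 12.0, 600: 11.0}  # lamp alone (per wavelength, nm)
sample = {500: 9.0, 550: 3.0, 600: 10.5}        # lamp shining through the sample

# The sample/background ratio is the transmittance; a strong absorption
# band appears as a low ratio (here, at 550 nm).
transmittance = {wl: sample[wl] / background[wl] for wl in background}
strongest = min(transmittance, key=transmittance.get)
print(strongest, transmittance[strongest])   # -> 550 0.25
```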

Accordingly, the technique of "Fourier transform spectroscopy" can be used both for measuring emission spectra (for 
example, the emission spectrum of a star) and absorption spectra (for example, the absorption spectrum of a liquid). 
Figure: an "interferogram" from a Fourier transform spectrometer. The horizontal axis is the position of the moving 
mirror, and the vertical axis is the amount of light detected. This is the "raw data" which can be Fourier 
transformed into an actual spectrum. 



Continuous wave Michelson or Fourier transform spectrograph 

The Michelson spectrograph is similar to the instrument used in the Michelson-Morley experiment. Light from the 
source is split into two beams by a half-silvered mirror; one is reflected off a fixed mirror and one off a moving 
mirror, which introduces a time delay. The Fourier transform spectrometer is just a Michelson interferometer with a 
movable mirror. The beams interfere, allowing the temporal coherence of the light to be measured at each different 
time delay setting, effectively converting the time domain into a spatial coordinate. By making measurements of the 
signal at many discrete positions of the moving mirror, the spectrum can be reconstructed using a Fourier transform 
of the temporal coherence of the light. Michelson spectrographs are capable of very high spectral resolution 
observations of very bright sources. The Michelson or Fourier transform spectrograph was popular for infra-red 
applications at a time when infra-red astronomy only had single-pixel detectors. Imaging Michelson spectrometers 
are a possibility, but in general have been supplanted by imaging Fabry–Perot instruments, which are easier to 
construct. 

Figure: the Fourier transform spectrometer is just a Michelson interferometer in which one of the two 
fully-reflecting mirrors is movable, allowing a variable delay (in the travel-time of the light) to be included in one 
of the beams. 

Extracting the spectrum 

The intensity as a function of the path length difference p in the interferometer and wavenumber ν = 1/λ is 

I(p, ν) = I(ν)[1 + cos(2πνp)], 

where I(ν) is the spectrum to be determined. Note that it is not necessary for I(ν) to be modulated by the sample 
before the interferometer. In fact, most FTIR spectrometers place the sample after the interferometer in the optical 
path. The total intensity at the detector is 

I(p) = ∫₀^∞ I(p, ν) dν = ∫₀^∞ I(ν)[1 + cos(2πνp)] dν. 

This is just a Fourier cosine transform. The inverse gives us our desired result in terms of the measured quantity 
I(p): 

I(ν) = 4 ∫₀^∞ [I(p) − (1/2) I(p = 0)] cos(2πνp) dp. 
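As a numerical sanity check of this inversion formula, the sketch below discretizes it for a single invented monochromatic line; in the discrete version a single line comes out with height proportional to twice the scan length (the discrete stand-in for a delta function) and approximately zero elsewhere.

```python
# Sketch: discretize I(v) = 4 * Int [I(p) - (1/2) I(p=0)] cos(2*pi*v*p) dp
# for a single invented spectral line at v0. All numbers are illustrative.
import math

v0 = 100.0            # line position in cm^-1
dp = 1e-4             # retardation step in cm
N = 2000              # samples; maximum retardation N*dp = 0.2 cm

def interferogram(p):
    # I(p) = I(v0) * [1 + cos(2*pi*v0*p)], with unit line intensity.
    return 1.0 + math.cos(2 * math.pi * v0 * p)

def spectrum(v):
    # Subtracting I(0)/2 removes the constant (DC) part of the interferogram.
    return 4 * dp * sum((interferogram(n * dp) - 0.5 * interferogram(0)) *
                        math.cos(2 * math.pi * v * n * dp)
                        for n in range(N))

# The line appears with height ~ 2 * (scan length) = 0.4 at v0, ~0 elsewhere.
print(spectrum(v0), spectrum(2 * v0))   # -> 0.4 and (numerically) ~0
```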


Pulsed Fourier transform spectrometer 

A pulsed Fourier transform spectrometer does not employ transmittance techniques. In the most general description 
of pulsed FT spectrometry, a sample is exposed to an energizing event which causes a periodic response. The 
frequency of the periodic response, as governed by the field conditions in the spectrometer, is indicative of the 
measured properties of the analyte. 

Examples of pulsed Fourier transform spectrometry 

In magnetic spectroscopy (EPR, NMR), an RF pulse in a strong ambient magnetic field is used as the energizing 
event. This turns the magnetic particles at an angle to the ambient field, resulting in gyration. The gyrating spins then 
induce a periodic current in a detector coil. Each spin exhibits a characteristic frequency of gyration (relative to the 
field strength) which reveals information about the analyte. 

In Fourier transform mass spectrometry, the energizing event is the injection of the charged sample into the strong 
electromagnetic field of a cyclotron. These particles travel in circles, inducing a current in a fixed coil on one point 
in their circle. Each traveling particle exhibits a characteristic cyclotron frequency-field ratio revealing the masses in 
the sample. 

Free induction decay 

Pulsed FT spectrometry gives the advantage of requiring a single, time-dependent measurement which can easily 
deconvolute a set of similar but distinct signals. The resulting composite signal is called a free induction decay, 
because typically the signal will decay due to inhomogeneities in sample frequency, or simply unrecoverable loss of 
signal due to entropic loss of the property being measured. 

Stationary forms of Fourier transform spectrometers 

In addition to the scanning forms of Fourier transform spectrometers, there are a number of stationary or 
self-scanned forms. While the analysis of the interferometric output is similar to that of the typical scanning 
interferometer, significant differences apply, as shown in the published analyses. Some stationary forms retain the 
Fellgett multiplex advantage, and their use in the spectral region where detector noise limits apply is similar to the 
scanning forms of the FTS. In the photon-noise limited region, the application of stationary interferometers is 
dictated by specific consideration for the spectral region and the application. 

Fellgett advantage 

One of the most important advantages of Fourier transform spectroscopy was shown by P.B. Fellgett, an early 
advocate of the method. The Fellgett advantage, also known as the multiplex principle, states that when obtaining a 
spectrum when measurement noise is dominated by detector noise, a multiplex spectrometer such as a Fourier 
transform spectrometer will produce a relative improvement in signal-to-noise ratio, compared to an equivalent 
scanning monochromator, of the order of the square root of m, where m is the number of sample points comprising 
the spectrum. 
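The square-root-of-m improvement can be checked with a small Monte Carlo sketch. This is illustrative only: a Hadamard ±1 encoding stands in for the interferometer's cosine encoding (both share the key property that every measurement sees every spectral element), and all numbers are invented.

```python
# Monte Carlo sketch of the multiplex (Fellgett) advantage in the
# detector-noise-limited regime. Hadamard +-1 encoding is used as a
# stand-in for the interferometer's cosine encoding; numbers are invented.
import random

random.seed(0)
m = 8                                  # number of spectral elements
x = [float(k) for k in range(m)]       # "true" spectrum
sigma = 1.0                            # detector noise per measurement

# Sylvester construction of an 8x8 Hadamard matrix (entries +-1).
H = [[1]]
while len(H) < m:
    H = [row + row for row in H] + [row + [-v for v in row] for row in H]

def trial():
    # Scanning: one noisy look at each element (m measurements in total).
    scan = [xk + random.gauss(0, sigma) for xk in x]
    # Multiplex: m noisy looks at +-1 combinations, decoded with H^T / m.
    y = [sum(h * xk for h, xk in zip(row, x)) + random.gauss(0, sigma)
         for row in H]
    mux = [sum(H[j][k] * y[j] for j in range(m)) / m for k in range(m)]
    return scan[3] - x[3], mux[3] - x[3]   # error in one element

errs = [trial() for _ in range(4000)]
std = lambda e: (sum(v * v for v in e) / len(e)) ** 0.5
ratio = std([a for a, _ in errs]) / std([b for _, b in errs])
print(round(ratio, 2))   # close to sqrt(m) = sqrt(8) ~ 2.83
```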


Converting spectra from time domain to frequency domain 

S(t) = ∫ I(ν) e^(−i2πνt) dν 

The sum is performed over all contributing frequencies to give a signal S(t) in the time domain. 

I(ν) = ∫ S(t) e^(i2πνt) dt 

gives a non-zero value when S(t) contains a component that matches the oscillating function. Remember that 

e^(ix) = cos x + i sin x. 
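Applied to a synthetic free induction decay, the pair of transforms behaves as described; in this sketch the two decaying cosines and all parameters are invented for the demo.

```python
# Sketch: the time-to-frequency conversion above applied to a synthetic
# free induction decay. The two decaying cosines and all parameter values
# are invented for the demo.
import math, cmath

N, dt = 512, 0.01                        # number of samples and spacing (s)

def S(t):
    # Two oscillating components decaying away (the "free induction decay").
    return (math.cos(2 * math.pi * 10 * t) +
            0.5 * math.cos(2 * math.pi * 25 * t)) * math.exp(-t / 2.0)

signal = [S(n * dt) for n in range(N)]

def I(v):
    # Discrete version of I(v) = integral of S(t) * exp(i*2*pi*v*t) dt:
    # large only where S(t) contains a matching oscillation.
    return abs(sum(s * cmath.exp(2j * math.pi * v * n * dt)
                   for n, s in enumerate(signal)) * dt)

peak = max(range(1, 50), key=I)
print(peak)   # -> 10, the frequency (Hz) of the strongest component
```

The weaker 25 Hz component appears at roughly half the height of the 10 Hz one, and frequencies between the two give values near zero.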

See also 

• Applied spectroscopy 

• Forensic chemistry 

• Forensic polymer engineering 

• Nuclear Magnetic Resonance 

• Infrared spectroscopy 



External links 

• Description of how a Fourier transform spectrometer works ( 

• The Michelson or Fourier transform spectrograph ( 

• Internet Journal of Vibrational Spectroscopy - How FTIR works ( 
sectionl .html#Feature) 

• Fourier Transform Spectroscopy Topical Meeting and Tabletop Exhibit ( 

FT-Near Infrared Spectroscopy and Imaging 



Fourier transform infrared spectroscopy (FTIR) is a technique which is used to obtain an infrared spectrum of 
absorption, emission, photoconductivity or Raman scattering of a solid, liquid or gas. An FTIR spectrometer 
simultaneously collects spectral data in a wide spectral range. This confers a significant advantage over a dispersive 
spectrometer, which measures intensity over a narrow range of wavelengths at a time. The FTIR technique has made 
dispersive infrared spectrometers all but obsolete (except sometimes in the near infrared) and opened up new 
applications of infrared spectroscopy. 

The term Fourier transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical 
algorithm) is required to convert the raw data into the actual spectrum. For other uses of this kind of technique, see 
Fourier transform spectroscopy. 

Conceptual introduction 

The goal of any absorption spectroscopy (FTIR, ultraviolet-visible 
("UV-Vis") spectroscopy, etc.) is to measure how well a sample 
absorbs light at each wavelength. The most straightforward way to do 
this, the "dispersive spectroscopy" technique, is to shine a 
monochromatic light beam at a sample, measure how much of the light 
is absorbed, and repeat for each different wavelength. (This is how 
UV-Vis spectrometers work, for example.) 

Figure: an interferogram from an FTIR spectrometer. The horizontal axis is the position of the mirror, and the 
vertical axis is the amount of light detected. This is the "raw data" which can be transformed into an actual 
spectrum. 

Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a 
monochromatic beam of light at the sample, this technique shines a beam containing many different frequencies of 
light at once, and measures how much of that beam is absorbed by the sample. Next, the beam is modified to contain 
a different combination of frequencies, giving a second data point. This process is repeated many times. Afterwards, 
a computer takes all this data and works backwards to infer what the absorption is at each wavelength. 

The beam described above is generated by starting with a broadband light source, one containing the full spectrum 
of wavelengths to be measured. The light shines into a certain configuration of mirrors, called a Michelson 
interferometer, that allows some wavelengths to pass through but blocks others (due to wave interference). The beam 
is modified for each new data point by moving one of the mirrors; this changes the set of wavelengths that pass 
through. 

As mentioned, computer processing is required to turn the raw data (light absorption for each mirror position) into 
the desired result (light absorption for each wavelength). The processing required turns out to be a common 
algorithm called the Fourier transform (hence the name, "Fourier transform spectroscopy"). The raw data is 
sometimes called an "interferogram". 



Figure: schematic diagram of a Michelson interferometer, configured for FTIR, showing the source, collimator, beam 
splitter, fixed mirror, and moving mirror. 

Michelson interferometer 

In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a 
black-body radiator, is collimated and directed to a beam splitter. Ideally 50% of the light is reflected towards the 
fixed mirror and 50% is transmitted towards the moving mirror. Light is reflected from the two mirrors back to the 
beam splitter, and (ideally) 50% of the original light passes into the sample compartment. There, the light is 
focussed on the sample. On leaving the sample compartment the light is refocused on to the detector. The difference 
in optical path length between the two arms of the interferometer is known as the retardation. An interferogram is 
obtained by varying the retardation and recording the signal from the detector for various values of the retardation. 
The form of the interferogram when no sample is present depends on factors such as the variation of source intensity 
and splitter efficiency with wavelength. This results in a maximum at zero retardation, when there is constructive 
interference at all wavelengths, followed by a series of "wiggles". The position of zero retardation is determined 
accurately by finding the point of maximum intensity in the interferogram. When a sample is present the background 
interferogram is modulated by the presence of absorption bands in the sample. 

There are two principal advantages of an FT spectrometer compared to a scanning (dispersive) spectrometer. 

1. The multiplex or Fellgett's advantage. This arises from the fact that information from all wavelengths is collected 
simultaneously. It results in a higher signal-to-noise ratio for a given scan-time, or a shorter scan-time for a given 
resolution. 

2. The throughput or Jacquinot's advantage. This results from the fact that, in a dispersive instrument, the 
monochromator has entrance and exit slits which restrict the amount of light that passes through it. The 
interferometer throughput is determined only by the diameter of the collimated beam coming from the source. 

Other minor advantages include less sensitivity to stray light and "Connes' advantage" (better wavelength 
accuracy), while a disadvantage is that FTIR cannot use the advanced electronic filtering techniques available to 
dispersive instruments, which often makes its signal-to-noise ratio inferior to that of dispersive measurements. 



The interferogram belongs to the length domain. The Fourier transform (FT) inverts the dimension, so the FT of the interferogram belongs to the reciprocal-length domain, that is, the wavenumber domain. The spectral resolution in cm⁻¹ is equal to the reciprocal of the maximum retardation in cm. Thus a 4 cm⁻¹ resolution will be obtained if the maximum retardation is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximum retardation. This is not easy, as the moving mirror must travel in a near-perfect straight line. The use of corner-cube mirrors in place of flat mirrors is helpful, as an outgoing ray from a corner-cube mirror is parallel to the incoming ray regardless of the orientation of the mirror about axes perpendicular to the axis of the light beam. In 1966 Connes measured the temperature of the atmosphere of Venus by recording the vibration-rotation spectrum of Venusian CO₂ at 0.1 cm⁻¹ resolution. [4] Michelson himself attempted to resolve the hydrogen Hα emission band in the spectrum of a hydrogen atom into its two components by using his interferometer. A spectrometer with 0.001 cm⁻¹ resolution is now available commercially from Bruker. The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits.
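The reciprocal relationship between resolution and retardation can be checked with a few lines of Python; a minimal sketch using the figures quoted above:

```python
# Spectral resolution of an FTIR instrument: the resolution in cm^-1
# is the reciprocal of the maximum retardation (optical path difference) in cm.

def ftir_resolution_cm1(max_retardation_cm: float) -> float:
    """Return the spectral resolution (cm^-1) for a given maximum retardation (cm)."""
    return 1.0 / max_retardation_cm

# 0.25 cm retardation gives the 4 cm^-1 resolution typical of cheaper instruments.
print(ftir_resolution_cm1(0.25))    # 4.0

# Conversely, a 0.001 cm^-1 instrument needs a maximum retardation of 1000 cm.
print(ftir_resolution_cm1(1000.0))  # 0.001
```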

Beam splitter 

The beam-splitter cannot be made of common glass, as glass is opaque to infrared radiation at wavelengths longer than about 2.5 μm. A thin film, usually of a plastic material, is used instead. However, as any material has a limited range of optical transmittance, several beam-splitters are used interchangeably to cover a wide spectral range.

Fourier transform 

The interferogram in practice consists of a set of intensities measured for discrete values of retardation. The 
difference between successive retardation values is constant. Thus, a discrete Fourier transform is needed. The fast 
Fourier transform (FFT) algorithm is used. 
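A minimal numerical sketch of this step, assuming a noiseless monochromatic source and hypothetical sampling parameters, shows how an evenly sampled interferogram is converted into a spectrum with the FFT:

```python
import numpy as np

# Simulate the interferogram of a hypothetical monochromatic line at 2000 cm^-1,
# sampled at equal retardation steps of 1e-4 cm (Nyquist limit: 5000 cm^-1).
wavenumber = 2000.0          # cm^-1
dx = 1e-4                    # retardation step, cm
n = 4096                     # number of samples
x = np.arange(n) * dx        # retardation values, cm
interferogram = np.cos(2 * np.pi * wavenumber * x)

# Discrete (fast) Fourier transform: the FFT frequency axis, in cycles per cm,
# is directly the wavenumber axis of the spectrum.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n, d=dx)   # cm^-1

peak = wavenumbers[np.argmax(spectrum)]
print(f"recovered line near {peak:.1f} cm^-1")  # within one resolution element of 2000
```

Note that the bin spacing of the recovered spectrum is 1/(n·dx), the reciprocal of the maximum retardation, in line with the resolution relation stated earlier.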

Far-infrared FTIR 

The first FTIR spectrometers were developed for the far-infrared range. The reason for this has to do with the mechanical tolerance needed for good optical performance, which is related to the wavelength of the light being used. For the relatively long wavelengths of the far infrared, tolerances of ~10 μm are adequate, whereas for the rock-salt region tolerances have to be better than 1 μm. A typical instrument was the cube interferometer developed
at the NPL and marketed by Grubb Parsons. It used a stepper motor to drive the moving mirror, recording the 
detector response after each step was completed. 

Mid-infrared FTIR 

With the advent of cheap microcomputers it became possible to have a computer dedicated to controlling the 
spectrometer, collecting the data, doing the Fourier transform and presenting the spectrum. This provided the 
impetus for the development of FTIR spectrometers for the rock-salt region. The problems of manufacturing 
ultra-high-precision optical and mechanical components had to be solved. A wide range of instruments is now available commercially. Although instrument design has become more sophisticated, the basic principles remain the same. Nowadays, the moving mirror of the interferometer moves at a constant velocity, and sampling of the interferogram is triggered by finding zero-crossings in the fringes of a secondary interferometer lit by a helium-neon laser. This confers high wavenumber accuracy on the resulting infrared spectrum and avoids the need for wavenumber calibration.

Near-infrared FTIR 

The near-infrared region spans the wavelength range between the rock-salt region and the start of the visible region 
at about 750 nm. Overtones of fundamental vibrations can be observed in this region. It is used mainly in industrial 
applications such as process control. 


Applications

FTIR can be used in all applications where a dispersive spectrometer was used in the past (see external links). In
addition, the multiplex and throughput advantages have opened up new areas of application. These include: 

• GC-IR (gas chromatography-infrared spectrometry). A gas chromatograph can be used to separate the components of a mixture. The fractions containing single components are directed into an FTIR spectrometer to provide the infrared spectrum of the sample. This technique is complementary to GC-MS (gas chromatography-mass spectrometry). The GC-IR method is particularly useful for identifying isomers, which by their nature have identical masses. The key to the successful use of GC-IR is that the interferogram can be captured in a very short time, typically less than 1 second. FTIR has also been applied to the analysis of liquid chromatography fractions.

• TG-IR (thermogravimetry-infrared spectrometry). IR spectra of the gases evolved during thermal decomposition are obtained as a function of temperature.

• Micro-samples. Tiny samples, such as in forensic analysis, can be examined with the aid of an infrared microscope in the sample chamber. An image of the surface can be obtained by scanning. Another example is the use of FTIR to characterize artistic materials in old-master paintings.

• Emission spectra. Instead of recording the spectrum of light transmitted through the sample, an FTIR spectrometer can be used to acquire the spectrum of light emitted by the sample. Such emission can be induced by various processes; the most common are luminescence and Raman scattering. Little modification of an absorption FTIR spectrometer is required to record emission spectra, and therefore many commercial FTIR spectrometers combine both absorption and emission/Raman modes.

• Photocurrent spectra. This mode uses a standard absorption FTIR spectrometer. The sample under study is placed in the position of the FTIR detector, and its photocurrent, induced by the spectrometer's broadband source, is used to record the interferogram, which is then converted into the photoconductivity spectrum of the sample.


References

[1] Griffiths, P.; de Haseth, J.A. (2007). Fourier Transform Infrared Spectrometry (2nd ed.). Wiley-Blackwell. ISBN 0471194042.
[2] Banwell, C.N.; McCash, E.M. (1994). Fundamentals of Molecular Spectroscopy (4th ed.). McGraw-Hill. ISBN 0-07-707976-0.
[3] White, R. (1990). Chromatography/Fourier Transform Infrared Spectroscopy and its Applications. Marcel Dekker. ISBN 0824781910.
[4] Connes, J.; Connes, P. (1966). "Near-Infrared Planetary Spectra by Fourier Spectroscopy. I. Instruments and Results". Journal of the Optical Society of America 56 (7): 896-910. doi:10.1364/JOSA.56.000896.
[5] Chamberlain, J.; Gibbs, J.E.; Gebbie, H.E. (1969). "The determination of refractive index spectra by Fourier spectrometry". Infrared Physics 9: 189-209. doi:10.1016/0020-0891(69)90023-2.
[6] Nishikida, K.; Nishio, E.; Hannah, R.W. (1995). Selected Applications of FT-IR Techniques. Gordon and Breach. p. 240. ISBN 2884490736.
[7] Beauchaine, J.P.; Peterman, J.W.; Rosenthal, R.J. (1988). "Applications of FT-IR/microscopy in forensic analysis". Microchimica Acta 94: 133-138. doi:10.1007/BF01205855.
[8] Prati, S.; Joseph, E.; Sciutto, G.; Mazzeo, R. (2010). "New Advances in the Application of FTIR Microscopy and Spectroscopy for the Characterization of Artistic Materials". Acc. Chem. Res. 43 (6): 792-801. doi:10.1021/ar900274f. PMID 20476733.
[9] Gaft, M.; Reisfeld, R.; Panczer, G. (2005). Luminescence Spectroscopy of Minerals and Materials. Springer. p. 263. ISBN 3540219188.
[10] Poortmans, J.; Arkhipov, V. (2006). Thin Film Solar Cells: Fabrication, Characterization and Applications. John Wiley and Sons. p. 189. ISBN 0470091266.

External links 

• Infracord spectrometer ( 
pdf) photograph 

• The Grubb-Parsons-NPL cube interferometer Spectroscopy, part 2 by Dudley Williams, page 81 (http://books. 
sa=X&oi=book_result&ct=result&resnum=9&ved=0CDgQ6AEwCA#v=onepage&q=grubb parsons cube& 

• FTIR application notes (http://las. htm?expand=Application& 
type=CATEGORY) from Perkin Elmer 


• FTIR application notes ( 
from Varian 

• Infrared / FTIR Application Notes ( Recent 

• Semiconductor applications ( 
SemiconductorApplOverview.pdf) FTIR Sampling Techniques Overview. 

• infrared materials ( Properties of many salt 
crystals and useful links. 

Chemical imaging 

Chemical imaging (as quantitative chemical mapping) is the analytical capability to create a visual image of the distribution of components from the simultaneous measurement of spectral and spatial (and, in some cases, temporal) information.

The main idea is that the analyst may choose to examine the spectrum measured for a particular chemical component at a given spatial location and time; this is useful for chemical identification and quantification. Alternatively, selecting an image plane at a particular spectral channel (from the multivariate data of wavelength, spatial location and time, e.g. via PCA) can map the spatial distribution of sample components, provided that their spectral signatures differ at the selected channel.

Software for chemical imaging is usually specialized, and is distinguished from software for conventional chemometric methods.

The imaging technique is most often applied to either solid or gel samples, and has applications in chemistry, biology, medicine, pharmacy (see also, for example, Chemical Imaging Without Dyeing), food science, biotechnology, agriculture and industry (see, for example, NIR Chemical Imaging in the Pharmaceutical Industry and Pharmaceutical Process Analytical Technology). NIR, IR and Raman chemical imaging is also referred to as hyperspectral, spectroscopic, spectral or multispectral imaging (see also microspectroscopy). However, other ultra-sensitive and selective imaging techniques are also in use that involve either UV-visible or fluorescence microspectroscopy. Many imaging techniques can be used to analyze samples of all sizes, from the single molecule to the cellular level in biology and medicine, and to images of planetary systems in astronomy, but different instrumentation is employed for making observations on such widely different systems.

Imaging instrumentation is composed of three components: a radiation source to illuminate the sample, a spectrally 
selective element, and usually a detector array (the camera) to collect the images. When many stacked spectral 
channels (wavelengths) are collected for different locations of the microspectrometer focus on a line or planar array 
in the focal plane, the data is called hyperspectral; fewer wavelength data sets are called multispectral. The data 
format is called a hypercube. The data set may be visualized as a three-dimensional block of data spanning two 
spatial dimensions (x and y), with a series of wavelengths (lambda) making up the third (spectral) axis. The 
hypercube can be visually and mathematically treated as a series of spectrally resolved images (each image plane 
corresponding to the image at one wavelength) or a series of spatially resolved spectra. 
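The hypercube description above translates directly into array indexing. A minimal sketch with NumPy, using hypothetical dimensions, shows the two complementary views of the same data block:

```python
import numpy as np

# A hypothetical hypercube: 64 x 64 spatial pixels, 100 spectral channels.
ny, nx, nlam = 64, 64, 100
hypercube = np.random.rand(ny, nx, nlam)   # axes: (y, x, lambda)

# A spectrally resolved image: the image plane at one wavelength channel.
image_at_channel_40 = hypercube[:, :, 40]   # shape (64, 64)

# A spatially resolved spectrum: the full spectrum at one pixel.
spectrum_at_pixel = hypercube[10, 25, :]    # shape (100,)

print(image_at_channel_40.shape, spectrum_at_pixel.shape)
```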

Many materials, both manufactured and naturally occurring, derive their functionality from the spatial distribution of 
sample components. For example, extended release pharmaceutical formulations can be achieved by using a coating 
that acts as a barrier layer. The release of active ingredient is controlled by the presence of this barrier, and 
imperfections in the coating, such as discontinuities, may result in altered performance. In the semi-conductor 
industry, irregularities or contaminants in silicon wafers or printed micro-circuits can lead to failure of these 
components. The functionality of biological systems is also dependent upon chemical gradients — a single cell, 
tissue, and even whole organs function because of the very specific arrangement of components. It has been shown 
that even small changes in chemical composition and distribution may be an early indicator of disease. 


Any material that depends on chemical gradients for functionality may be amenable to study by an analytical 
technique that couples spatial and chemical characterization. To efficiently and effectively design and manufacture 
such materials, the 'what' and the 'where' must both be measured. The demand for this type of analysis is increasing 
as manufactured materials become more complex. Chemical imaging techniques are critical to understanding modern manufactured products, and in some cases are non-destructive, so that samples are preserved for further analysis.


History

Commercially available laboratory-based chemical imaging systems emerged in the early 1990s (ref. 1-5). In
addition to economic factors, such as the need for sophisticated electronics and extremely high-end computers, a 
significant barrier to the commercialization of infrared imaging was that the focal plane arrays (FPAs) needed to read IR images were not readily available as commercial items. As high-speed electronics and sophisticated computers
became more commonplace, and infrared cameras became readily commercially available, laboratory chemical 
imaging systems were introduced. 

Initially used for novel research in specialized laboratories, chemical imaging became a more commonplace 
analytical technique used for general R&D, quality assurance (QA) and quality control (QC) in less than a decade. 
The rapid acceptance of the technology in a variety of industries (pharmaceutical, polymers, semiconductors, 
security, forensics and agriculture) rests in the wealth of information characterizing both chemical composition and 
morphology. The parallel nature of chemical imaging data makes it possible to analyze multiple samples 
simultaneously for applications that require high throughput analysis in addition to characterizing a single sample. 


Principles

Chemical imaging shares the fundamentals of vibrational spectroscopic techniques, but provides additional
information by way of the simultaneous acquisition of spatially resolved spectra. It combines the advantages of 
digital imaging with the attributes of spectroscopic measurements. Briefly, vibrational spectroscopy measures the 
interaction of light with matter. Photons that interact with a sample are either absorbed or scattered; photons of 
specific energy are absorbed, and the pattern of absorption provides information, or a fingerprint, on the molecules 
that are present in the sample. 

On the other hand, in terms of the observation setup, chemical imaging can be carried out in one of the following 
modes: (optical) absorption, emission (fluorescence), (optical) transmission or scattering (Raman). A consensus 
currently exists that the fluorescence (emission) and Raman scattering modes are the most sensitive and powerful, 
but also the most expensive. 

In a transmission measurement, the radiation goes through a sample and is measured by a detector placed on the far 
side of the sample. The energy transferred from the incoming radiation to the molecule(s) can be calculated as the 
difference between the quantity of photons that were emitted by the source and the quantity that is measured by the 
detector. In a diffuse reflectance measurement, the same energy difference measurement is made, but the source and 
detector are located on the same side of the sample, and the photons that are measured have re-emerged from the 
illuminated side of the sample rather than passed through it. The energy may be measured at one or multiple 
wavelengths; when a series of measurements are made, the response curve is called a spectrum. 
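The energy-difference idea can be made concrete with the conventional transmittance and absorbance relations (a sketch with hypothetical photon counts; the logarithmic absorbance form is standard practice, though the text above speaks only of count differences):

```python
import math

def transmittance(i_sample: float, i_source: float) -> float:
    """Fraction of the incident radiation that reaches the detector."""
    return i_sample / i_source

def absorbance(i_sample: float, i_source: float) -> float:
    """A = -log10(T): the usual way a transmission spectrum is reported."""
    return -math.log10(transmittance(i_sample, i_source))

# Hypothetical counts: the detector measures 10% of what the source emitted
# at this wavelength, i.e. T = 0.1 and A = 1.0.
print(absorbance(100.0, 1000.0))   # 1.0
```

Repeating this calculation at each wavelength yields the response curve, i.e. the spectrum.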

A key element in acquiring spectra is that the radiation must somehow be energy selected — either before or after 
interacting with the sample. Wavelength selection can be accomplished with a fixed filter, tunable filter, 
spectrograph, an interferometer, or other devices. For a fixed filter approach, it is not efficient to collect a significant 
number of wavelengths, and multispectral data are usually collected. Interferometer-based chemical imaging requires 
that entire spectral ranges be collected, and therefore results in hyperspectral data. Tunable filters have the flexibility 
to provide either multi- or hyperspectral data, depending on analytical requirements. 


Spectra may be measured one point at a time using a single element detector (single-point mapping), as a line-image 
using a linear array detector (typically 16 to 28 pixels) (linear array mapping), or as a two-dimensional image using a 
Focal Plane Array (FPA) (typically 256 to 16,384 pixels) (FPA imaging). For single-point mapping, the sample is moved in the x and y directions point-by-point using a computer-controlled stage. With linear array mapping, the sample is moved line-by-line with a computer-controlled stage. FPA imaging data are collected with a two-dimensional FPA detector, hence capturing the full desired field of view at one time for each individual wavelength, without having to move the sample. FPA imaging, with its ability to collect tens of thousands of spectra simultaneously, is orders of magnitude faster than linear arrays, which can typically collect 16 to 28 spectra simultaneously, which are in turn much faster than single-point mapping.
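The speed ordering follows directly from the number of stage positions each mode needs to cover the same field of view. A back-of-the-envelope sketch, assuming a hypothetical 256 x 256 pixel image and a 16-pixel linear array (the lower end of the range quoted above):

```python
# Stage positions needed to collect a 256 x 256 pixel chemical image.
pixels_x, pixels_y = 256, 256
total_spectra = pixels_x * pixels_y            # 65,536 spectra in all

point_mapping_positions = total_spectra        # one spectrum per stage position
linear_array_positions = total_spectra // 16   # one 16-pixel line per stage position
fpa_positions = 1                              # whole field of view captured at once

print(point_mapping_positions, linear_array_positions, fpa_positions)
# 65536 4096 1  (FPA imaging >> linear array mapping >> single-point mapping)
```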


Terminology

Some words common in spectroscopy, optical microscopy and photography have been adapted or their scope
modified for their use in chemical imaging. They include: resolution, field of view and magnification. There are two 
types of resolution in chemical imaging. The spectral resolution refers to the ability to resolve small energy 
differences; it applies to the spectral axis. The spatial resolution is the minimum distance between two objects that is 
required for them to be detected as distinct objects. The spatial resolution is influenced by the field of view, a 
physical measure of the size of the area probed by the analysis. In imaging, the field of view is a product of the 
magnification and the number of pixels in the detector array. The magnification is a ratio of the physical area of the 
detector array divided by the area of the sample field of view. Higher magnifications for the same detector image a 
smaller area of the sample. 
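Following the area-ratio definition of magnification given above, the relationship between detector size, magnification and field of view can be sketched as follows (the detector dimensions are hypothetical):

```python
# Magnification = (physical area of detector array) / (area of sample field of view),
# as defined in the text, so the imaged sample area shrinks as magnification grows.

def field_of_view_area(detector_area_mm2: float, magnification: float) -> float:
    """Sample area imaged onto the detector, in mm^2."""
    return detector_area_mm2 / magnification

detector_area = 16.0   # mm^2: a hypothetical 4 mm x 4 mm focal plane array
for mag in (4, 16, 64):
    print(mag, field_of_view_area(detector_area, mag))  # higher mag -> smaller area
```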

Types of vibrational chemical imaging instruments 

Chemical imaging has been implemented for mid-infrared, near-infrared spectroscopy and Raman spectroscopy. As 
with their bulk spectroscopy counterparts, each imaging technique has particular strengths and weaknesses, and are 
best suited to fulfill different needs. 

Mid-infrared chemical imaging 

Mid-infrared (MIR) spectroscopy probes fundamental molecular vibrations, which arise in the spectral range 
2,500-25,000 nm. Commercial imaging implementations in the MIR region typically employ Fourier Transform 
Infrared (FT-IR) interferometers and the range is more commonly presented in wavenumbers, 4,000-400 cm⁻¹. The
MIR absorption bands tend to be relatively narrow and well-resolved; direct spectral interpretation is often possible 
by an experienced spectroscopist. MIR spectroscopy can distinguish subtle changes in chemistry and structure, and is 
often used for the identification of unknown materials. The absorptions in this spectral range are relatively strong; 
for this reason, sample presentation is important to limit the amount of material interacting with the incoming 
radiation in the MIR region. Most data collected in this range is collected in transmission mode through thin sections (~10 micrometres) of material. Water is a very strong absorber of MIR radiation and wet samples often require
advanced sampling procedures (such as attenuated total reflectance). Commercial instruments include point and line 
mapping, and imaging. All employ an FT-IR interferometer as wavelength selective element and light source. 

For types of MIR microscope, see Microscopy#Infrared microscopy.

Atmospheric windows in the infrared spectrum are also employed to perform chemical imaging remotely. In these spectral regions the atmospheric gases (mainly water and CO₂) present low absorption and allow infrared viewing over kilometer distances. Target molecules can then be viewed using the selective absorption/emission processes described above. An example of the chemical imaging of a simultaneous release of SF₆ and NH₃ is shown in the figure.

[Figure: Remote chemical imaging of a simultaneous release of SF₆ and NH₃ at 1.5 km using the FIRST imaging spectrometer.]


Near-infrared chemical imaging 

The analytical near infrared (NIR) region spans the range from approximately 700-2,500 nm. The absorption bands 
seen in this spectral range arise from overtones and combination bands of O-H, N-H, C-H and S-H stretching and 
bending vibrations. Absorption is one to two orders of magnitude smaller in the NIR compared to the MIR; this 
phenomenon eliminates the need for extensive sample preparation. Thick and thin samples can be analyzed without 
any sample preparation, it is possible to acquire NIR chemical images through some packaging materials, and the 
technique can be used to examine hydrated samples, within limits. Intact samples can be imaged in transmittance or 
diffuse reflectance. 

The lineshapes for overtone and combination bands tend to be much broader and more overlapped than for the fundamental bands seen in the MIR. Often, multivariate methods are used to separate the spectral signatures of sample components. NIR chemical imaging is particularly useful for performing rapid, reproducible and non-destructive analyses of known materials. [23] [24] NIR imaging instruments are typically based on one of two platforms: imaging using a tunable filter and broad-band illumination, and line mapping employing an FT-IR interferometer as the wavelength filter and light source.

Raman chemical imaging 

The Raman shift chemical imaging spectral range spans from approximately 50 to 4,000 cm⁻¹; the actual spectral range over which a particular Raman measurement is made is a function of the laser excitation frequency. The basic principle behind Raman spectroscopy differs from the MIR and NIR in that the x-axis of the Raman spectrum is measured as a function of energy shift (in cm⁻¹) relative to the frequency of the laser used as the source of radiation. Briefly, the Raman spectrum arises from inelastic scattering of incident photons, which requires a change in polarizability with vibration, as opposed to infrared absorption, which requires a change in dipole moment with vibration. The end result is spectral information that is similar and in many cases complementary to the MIR. The Raman effect is weak - only about one in 10⁷ photons incident on the sample undergoes Raman scattering. Both organic and inorganic materials possess a Raman spectrum; they generally produce sharp bands that are chemically specific. Fluorescence is a competing phenomenon and, depending on the sample, can overwhelm the Raman signal, for both bulk spectroscopy and imaging implementations.

Raman chemical imaging requires little or no sample preparation. However, physical sample sectioning may be used 
to expose the surface of interest, with care taken to obtain a surface that is as flat as possible. The conditions required 
for a particular measurement dictate the level of invasiveness of the technique, and samples that are sensitive to high 
power laser radiation may be damaged during analysis. It is relatively insensitive to the presence of water in the 
sample and is therefore useful for imaging samples that contain water such as biological material. 


Fluorescence imaging (visible and NIR) 

This emission microspectroscopy mode is the most sensitive in both visible and FT-NIR microspectroscopy, and has 
therefore numerous biomedical, biotechnological and agricultural applications. There are several powerful, highly 
specific and sensitive fluorescence techniques that are currently in use, or still being developed; among the former 
are FLIM, FRAP, FRET and FLIM-FRET; among the latter are NIR fluorescence and probe-sensitivity enhanced 
NIR fluorescence microspectroscopy and nanospectroscopy techniques (see "Further reading" section). 

Sampling and samples 

The value of imaging lies in the ability to resolve spatial heterogeneities in solid-state or gel/gel-like samples. 
Imaging a liquid or even a suspension has limited use as constant sample motion serves to average spatial 
information, unless ultra-fast recording techniques are employed, as in fluorescence correlation microspectroscopy or FLIM observations, where a single molecule may be monitored at extremely high (photon) detection speed.
High-throughput experiments (such as imaging multi-well plates) of liquid samples can however provide valuable 
information. In this case, the parallel acquisition of thousands of spectra can be used to compare differences between 
samples, rather than the more common implementation of exploring spatial heterogeneity within a single sample. 

Similarly, there is no benefit in imaging a truly homogeneous sample, as a single point spectrometer will generate 
the same spectral information. Of course the definition of homogeneity is dependent on the spatial resolution of the 
imaging system employed. For MIR imaging, where wavelengths span from 3-10 micrometres, objects on the order 
of 5 micrometres may theoretically be resolved. The sampled areas are limited by current experimental 
implementations because illumination is provided by the interferometer. Raman imaging may be able to resolve 
particles less than 1 micrometre in size, but the sample area that can be illuminated is severely limited. With Raman 
imaging, it is considered impractical to image large areas and, consequently, large samples. FT-NIR 
chemical/hyperspectral imaging usually resolves only larger objects (>10 micrometres), and is better suited for large 

samples because illumination sources are readily available. However, FT-NIR microspectroscopy was recently reported to be capable of about 1.2 micron (micrometre) resolution in biological samples. Furthermore, two-photon excitation FCS experiments were reported to have attained 15 nanometre resolution on biomembrane thin films with a special coincidence photon-counting setup.

Detection limit 

The concept of the detection limit for chemical imaging is quite different than for bulk spectroscopy, as it depends 
on the sample itself. Because a bulk spectrum represents an average of the materials present, the spectral signatures 
of trace components are simply overwhelmed by dilution. In imaging however, each pixel has a corresponding 
spectrum. If the physical size of the trace contaminant is on the order of the pixel size imaged on the sample, its 
spectral signature will likely be detectable. If however, the trace component is dispersed homogeneously (relative to 
pixel image size) throughout a sample, it will not be detectable. Therefore, detection limits of chemical imaging 
techniques are strongly influenced by particle size, the chemical and spatial heterogeneity of the sample, and the 
spatial resolution of the image. 

Data analysis 

Data analysis methods for chemical imaging data sets typically employ mathematical algorithms common to single 
point spectroscopy or to image analysis. The reasoning is that the spectrum acquired by each detector is equivalent to 
a single point spectrum; therefore pre-processing, chemometrics and pattern recognition techniques are utilized with 
the similar goal to separate chemical and physical effects and perform a qualitative or quantitative characterization of 
individual sample components. In the spatial dimension, each chemical image is equivalent to a digital image and 
standard image analysis and robust statistical analysis can be used for feature extraction. 
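The "each pixel is a single-point spectrum" view described above is how chemometric tools are applied in practice: the hypercube is unfolded into a (pixels x wavelengths) matrix and, for example, a principal component analysis is run on it. A minimal sketch using the SVD on hypothetical random data:

```python
import numpy as np

# Hypothetical hypercube: 32 x 32 pixels, 50 spectral channels.
ny, nx, nlam = 32, 32, 50
rng = np.random.default_rng(0)
hypercube = rng.random((ny, nx, nlam))

# Unfold: every pixel becomes one row, i.e. one single-point spectrum.
matrix = hypercube.reshape(ny * nx, nlam)

# Mean-center and run PCA via the singular value decomposition.
centered = matrix - matrix.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

scores = u[:, :3] * s[:3]                  # per-pixel scores, first 3 components
score_images = scores.reshape(ny, nx, 3)   # refold into component "images"
print(score_images.shape)                  # (32, 32, 3)
```

Each refolded score image can then be treated as a digital image for feature extraction, exactly as the text describes.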


See also 

• Multispectral image 

• Microspectroscopy 

• Imaging spectroscopy 


References

[1] imaging

[2] E. N. Lewis, E. Lee and L. H. Kidder, Combining 

Imaging and Spectroscopy: Solving Problems with Near-Infrared Chemical Imaging. Microscopy Today, Volume 12, No. 6, 11/2004. 
[3] C.L. Evans and X.S. Xie.2008. Coherent Anti-Stokes Raman Scattering Microscopy: Chemical Imaging for Biology and Medicine., 

doi:10.1146/annurev.anchem.l.031207. 112754 Annual Review of Analytical Chemistry, 1: 883-909. 
[4] Diaspro, A., and Robello, M. (1999). Multi-photon Excitation Microscopy to Study Biosystems. European Microscopy and Analysis., 5:5-7. 
[5] D.S. Mantus and G. H. Morrison. 1991. Chemical imaging in biology and medicine using ion microscopy., Microchimica Acta, 104, (1-6) 

January 1991, doi: 10.1007/BF01245536 
[6] Bagatolli, L.A., and Gratton, E. (2000). Two-photon fluorescence microscopy of coexisting lipid domains in giant unilamellar vesicles of 

binary phospholipid mixtures. Biophys J., 78:290-305. 
[7] Schwille, P., Haupts, U., Maiti, S., and Webb. W.(1999). Molecular dynamics in living cells observed by fluorescence correlation 

spectroscopy with one- and two-photon excitation. Biophysical Journal, 77(10):2251-2265. 
[8] Lee, S. C. et al. (2001). One Micrometer Resolution NMR Microscopy. J. Magn. Res., 150: 207-213.
[9] Near Infrared Microspectroscopy, Fluorescence Microspectroscopy, Infrared Chemical Imaging and High Resolution Nuclear Magnetic 

Resonance Analysis of Soybean Seeds, Somatic Embryos and Single Cells., Baianu, I.C. et al. 2004., In Oil Extraction and Analysis. , D. 

Luthria, Editor pp.241-273, AOCS Press., Champaign, IL. 
[10] Single Cancer Cell Detection by Near Infrared Microspectroscopy, Infrared Chemical Imaging and Fluorescence Microspectroscopy. 2004. I. C. Baianu, D. Costescu, N. E. Hofmann and S. S. Korban, q-bio/0407006 (July 2004)

[11] J. Dubois, G. Sando, E. N. Lewis, Near-Infrared Chemical Imaging, A Valuable Tool for the Pharmaceutical Industry, G.I.T. Laboratory Journal Europe, No. 1-2, 2007.


[13] Raghavachari, R., Editor. 2001. Near-Infrared Applications in Biotechnology, Marcel-Dekker, New York, NY. 

[14] Applications of Novel Techniques to Health Foods, Medical and Agricultural Biotechnology. (June 2004) I. C. Baianu, P. R. Lozano, V. I. 

Prisecaru and H. C. Lin q-bio/0406047 ( 
[17] Eigen, M., and Rigler, R. (1994). Sorting single molecules: Applications to diagnostics and evolutionary biotechnology, Proc. Natl. Acad. 

Sci. USA 91:5740. 
[18] Rigler R. and Widengren J. (1990). Ultrasensitive detection of single molecules by fluorescence correlation spectroscopy, BioScience (Ed. 

Klinge & Owman) p. 180. 
[19] Single Cancer Cell Detection by Near Infrared Microspectroscopy, Infrared Chemical Imaging and Fluorescence Microspectroscopy. 2004. I. C. Baianu, D. Costescu, N. E. Hofmann, S. S. Korban et al., q-bio/0407006 (July 2004)
[20] Oehlenschlager F., Schwille P. and Eigen M. (1996). Detection of HIV-1 RNA by nucleic acid sequence-based amplification combined with 

fluorescence correlation spectroscopy, Proc. Natl. Acad. Sci. USA 93:1281. 
[21] Near Infrared Microspectroscopy, Fluorescence Microspectroscopy, Infrared Chemical Imaging and High Resolution Nuclear Magnetic 

Resonance Analysis of Soybean Seeds, Somatic Embryos and Single Cells., Baianu, I.C. et al. 2004., In Oil Extraction and Analysis.,!). 

Luthria, Editor pp.241-273, AOCS Press., Champaign, IL. 
[22] M. Chamberland, V. Farley, A. Vallieres, L. Belhumeur, A. Villemaire, J. Giroux et J. Legault, High-Performance Field-Portable Imaging 

Radiometric Spectrometer Technology For Hyperspectral imaging Applications, Proc. SPIE 5994, 59940N, September 2005. 
[23] Novel Techniques for Microspectroscopy and Chemical Imaging Analysis of Soybean Seeds and Embryos. (2002). Baianu, I.C., Costescu, 

D.M., and You, T. Soy2002 Conference, Urbana, Illinois. 
[24] Near Infrared Microspectroscopy, Chemical Imaging and NMR Analysis of Oil in Developing and Mutagenized Soybean Embryos in 

Culture. (2003). Baianu, I.C., Costescu, D.M., Hofmann, N., and Korban, S.S. AOCS Meeting, Analytical Division. 
[25] Near Infrared Microspectroscopy, Fluorescence Microspectroscopy, Infrared Chemical Imaging and High Resolution Nuclear Magnetic 

Resonance Analysis of Soybean Seeds, Somatic Embryos and Single Cells., Baianu, I.C. et al. 2004., In Oil Extraction and Analysis. , D. 

Luthria, Editor pp.241-273, AOCS Press., Champaign, IL. 


Further reading 

1. E. N. Lewis, P. J. Treado, I. W. Levin, Near-Infrared and Raman Spectroscopic Imaging, American Laboratory, 

2. E. N. Lewis, P. J. Treado, R. C. Reeder, G. M. Story, A. E. Dowrey, C. Marcott, I. W. Levin, FTIR spectroscopic 
imaging using an infrared focal-plane array detector, Analytical Chemistry, 67:3377 (1995) 

3. P. Colarusso, L. H. Kidder, I. W. Levin, J. C. Fraser, E. N. Lewis Infrared Spectroscopic Imaging: from Planetary 
to Cellular Systems, Applied Spectroscopy, 52 (3):106A (1998) 

4. P. J. Treado, I. W. Levin, E. N. Lewis, Near-Infrared Spectroscopic Imaging Microscopy of Biological Materials
Using an Infrared Focal-Plane Array and an Acousto-Optic Tunable Filter (AOTF), Applied Spectroscopy, 48:5

5. Hammond, S.V., Clarke, F.C., Near-infrared microspectroscopy. In: Handbook of Vibrational Spectroscopy,
Vol. 2, J.M. Chalmers and P.R. Griffiths Eds. John Wiley and Sons, West Sussex, UK, 2002, pp. 1405-1418

6. L.H. Kidder, A.S. Haka, E.N. Lewis, Instrumentation for FT-IR Imaging. In: Handbook of Vibrational 
Spectroscopy, Vol. 2, J.M. Chalmers and P.R. Griffiths Eds. John Wiley and Sons, West Sussex, UK, 2002, 
pp. 1386-1404 

7. J. Zhang; A. O'Connor; J. F. Turner II, Cosine Histogram Analysis for Spectral Image Data
Classification, Applied Spectroscopy, Volume 58, Number 11, November 2004, pp. 1318-1324(7)

8. J. F. Turner II; J. Zhang; A. O'Connor, A Spectral Identity Mapper for Chemical Image Analysis, Applied 
Spectroscopy, Volume 58, Number 11, November 2004, pp. 1308-1317(10) 

9. H. R. Morris, J. F. Turner II, B. Munro, R. A. Ryntz, P. J. Treado, Chemical imaging of
thermoplastic olefin (TPO) surface architecture, Langmuir, 1999, vol. 15, no. 8, pp. 2961-2972

10. J. F. Turner II, Chemical imaging and spectroscopy using tunable filters: Instrumentation, methodology, and
multivariate analysis, Thesis (PhD). University of Pittsburgh, Source DAI-B 59/09, p. 4782, Mar 1999,
286 pages.

11. P. Schwille.(2001). in Fluorescence Correlation Spectroscopy. Theory and applications. R. Rigler & E.S. Elson, 
eds., p. 360. Springer Verlag: Berlin. 

12. Schwille P., Oehlenschlager F. and Walter N. (1996). Analysis of RNA-DNA hybridization kinetics by 
fluorescence correlation spectroscopy, Biochemistry 35:10182. 

13. FLIM | Fluorescence Lifetime Imaging Microscopy: Fluorescence, fluorophore chemical imaging, confocal
emission microspectroscopy, FRET, cross-correlation fluorescence microspectroscopy (http://www.nikoninstruments.com/infocenter.php?n=FLIM).

14. FLIM Applications: (http://www.nikoninstruments.com/infocenter.php?n=FLIM) "FLIM is able to
discriminate between fluorescence emanating from different fluorophores and autofluorescing molecules in a
specimen, even if their emission spectra are similar. It is, therefore, ideal for identifying fluorophores in
multi-label studies. FLIM can also be used to measure intracellular ion concentrations without extensive
calibration procedures (for example, Calcium Green) and to obtain information about the local environment of a
fluorophore based on changes in its lifetime." FLIM is also often used in microspectroscopic/chemical imaging,
or microscopic, studies to monitor spatial and temporal protein-protein interactions, properties of membranes and
interactions with nucleic acids in living cells.

15. Gadella TW Jr., FRET and FLIM techniques, 33. Imprint: Elsevier, ISBN 978-0-08-054958-3. (2008) 560 pages 

16. Langel FD, et al., Multiple protein domains mediate interaction between Bcl10 and Malt1, J. Biol. Chem.,
17. Clayton AH. , The polarized AB plot for the frequency-domain analysis and representation of fluorophore 
rotation and resonance energy homotransfer. J Microscopy. (2008) 232(2):306-12 

18. Clayton AH, et al., Predominance of activated EGFR higher-order oligomers on the cell surface. Growth 
Factors (2008) 20:1 

Chemical imaging 161 

19. Plowman et al., Electrostatic Interactions Positively Regulate K-Ras Nanocluster Formation and Function. 
Molecular and Cellular Biology (2008) 4377-4385 

20. Belanis L, et al., Galectin-1 Is a Novel Structural Component and a Major Regulator of H-Ras Nanoclusters. 
Molecular Biology of the Cell (2008) 19:1404-1414 

21. Van Manen HJ, Refractive index sensing of green fluorescent proteins in living cells using fluorescence lifetime 
imaging microscopy. Biophys J. (2008) 94(8):L67-9 

22. Van der Krogt GNM, et al., A Comparison of Donor-Acceptor Pairs for Genetically Encoded FRET Sensors:
Application to the Epac cAMP Sensor as an Example, PLoS ONE, (2008) 3(4):e1916

23. Dai X, et al., Fluorescence intensity and lifetime imaging of free and micellar-encapsulated doxorubicin in
living cells. Nanomedicine. (2008) 4(1):49-56.

External links 

• NIR Chemical Imaging in Pharmaceutical Industry

• Pharmaceutical Process Analytical Technology

• NIR Chemical Imaging for Counterfeit Pharmaceutical Product Analysis

• Chemical Imaging: Potential New Crime Busting Tool

• Applications of Chemical Imaging in Research

Hyperspectral imaging 

Hyperspectral imaging collects and processes information from across the electromagnetic spectrum. Unlike the
human eye, which sees only visible light, hyperspectral imaging is more like the eyes of the mantis shrimp, which
can see light from the ultraviolet to the infrared in addition to the visible. Hyperspectral capabilities enable the
mantis shrimp to recognize different types of coral, prey, or predators, all of which may appear as the same color
to the human eye.

Humans build sensors and processing systems to provide the same type of capability for application in agriculture, 
mineralogy, physics, and surveillance. Hyperspectral sensors look at objects using a vast portion of the 
electromagnetic spectrum. Certain objects leave unique 'fingerprints' across the electromagnetic spectrum. These 
'fingerprints' are known as spectral signatures and enable identification of the materials that make up a scanned 
object. For example, having the spectral signature for oil helps mineralogists find new oil fields. 


Acquisition and Analysis 

Hyperspectral sensors collect information as a set of 'images'. Each 
image represents a range of the electromagnetic spectrum and is also 
known as a spectral band. These 'images' are then combined and form a 
three dimensional hyperspectral cube for processing and analysis. 
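The assembly of band images into a cube can be sketched in a few lines (a hypothetical NumPy illustration using synthetic data; a real cube would come from a sensor such as AVIRIS, and the band count and image size here are arbitrary):

```python
import numpy as np

# Each spectral band is acquired as a separate 2-D image (rows x cols).
# Stacking them along a third axis yields the hyperspectral cube.
n_rows, n_cols, n_bands = 64, 64, 20
band_images = [np.random.rand(n_rows, n_cols) for _ in range(n_bands)]

cube = np.stack(band_images, axis=-1)   # shape: (rows, cols, bands)

# The spectrum of a single pixel is a 1-D slice through the band axis.
pixel_spectrum = cube[10, 10, :]
```

Every analysis step described below (signature matching, band combination, quantitative mapping) operates on slices of such a cube.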

Hyperspectral cubes are generated from airborne sensors like NASA's Airborne Visible/Infrared Imaging
Spectrometer (AVIRIS), or from satellites like NASA's Hyperion. For many development and validation studies,
however, handheld sensors are used.

The precision of these sensors is typically measured in spectral resolution, which is the width of each band of the
spectrum that is captured. If the scanner picks up on a large number of fairly narrow frequency bands, it is
possible to identify objects even if said objects are only captured in a handful of pixels. However, spatial
resolution is a factor in addition to spectral resolution. If the pixels are too large, then multiple objects are
captured in the same pixel and become difficult to identify. If the pixels are too small, then the energy captured
by each sensor-cell is low, and the decreased signal-to-noise ratio reduces the reliability of measured features.

MicroMSI, Opticks and ENVI are three remote sensing applications that support the processing and analysis of
hyperspectral data. The acquisition and processing of hyperspectral images is also referred to as imaging
spectroscopy.

Example of a hyperspectral cube 

Differences between hyperspectral and multispectral imaging 


Hyperspectral imaging is part of a class of techniques commonly 
referred to as spectral imaging or spectral analysis. Hyperspectral 
imaging is related to multispectral imaging. The distinction between 
hyper- and multi-spectral should not be based on a random or arbitrary 
"number of bands". A distinction that is based on the type of 
measurement may be more appropriate. 

Multispectral deals with several images at discrete and somewhat 
narrow bands. The "discrete and somewhat narrow" is what 
distinguishes multispectral in the visible from color photography. A 
multispectral sensor may have many bands covering the spectrum from 
the visible to the longwave infrared. Multispectral images do not 
produce the "spectrum" of an object. Landsat is an excellent example. 

Hyperspectral deals with imaging narrow spectral bands over a contiguous spectral range, and produces the spectra
of all pixels in the scene. So a sensor with only 20 bands can also be hyperspectral when it covers the range from
500 to 700 nm with 20 bands each 10 nm wide. (A sensor with 20 discrete bands covering the VIS, NIR, SWIR,
MWIR, and LWIR would, by contrast, be considered multispectral.)
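This band-contiguity criterion can be expressed as a small heuristic (an illustrative sketch, not a standard taxonomy; the band lists and the gap tolerance are assumptions made for this example):

```python
def is_hyperspectral(band_edges_nm, gap_tol_nm=1.0):
    """Heuristic from the text: hyperspectral sensors image narrow bands
    over a contiguous range; multispectral bands are discrete, with gaps.
    band_edges_nm: sorted list of (start, stop) wavelength pairs."""
    for (_, stop), (start, _) in zip(band_edges_nm, band_edges_nm[1:]):
        if start - stop > gap_tol_nm:   # a gap between adjacent bands
            return False
    return True

# 20 contiguous 10-nm bands from 500 to 700 nm -> hyperspectral
contiguous = [(500 + 10 * i, 510 + 10 * i) for i in range(20)]
# a few discrete bands spread across the VIS and SWIR -> multispectral
discrete = [(450, 520), (630, 690), (1550, 1750)]
```

Under this rule the 20-band sensor from the paragraph above counts as hyperspectral, while the Landsat-style discrete-band layout does not.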

Ultraspectral could be reserved for interferometer-type imaging sensors with a very fine spectral resolution. These
sensors often (but not necessarily) have a low spatial resolution of only a few pixels, a restriction imposed by the
high data rate.

Hyperspectral and Multispectral Differences. 



Applications

Hyperspectral remote sensing is used in a wide array of real-life applications. Although originally developed for
mining and geology (the ability of hyperspectral imaging to identify various minerals makes it ideal for the mining
and oil industries, where it can be used to look for ore and oil), it has now spread into fields as diverse as
ecology and surveillance, as well as historical manuscript research such as the imaging of the Archimedes
Palimpsest. This technology is continually becoming more available to the public, and has been used in a wide
variety of ways. Organizations such as NASA and the USGS have catalogues of various minerals and their spectral
signatures, and have posted them online to make them readily available for researchers.
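One common way such library signatures are matched against scanned pixels is the Spectral Angle Mapper, which compares the angle between spectra; the sketch below uses toy four-band signatures (hypothetical values, not drawn from any real catalogue):

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar.
    This is the Spectral Angle Mapper (SAM) similarity measure."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def identify(pixel, library):
    """Return the library material whose signature is closest in angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy 4-band signatures (illustrative values only).
library = {
    "vegetation": [0.05, 0.08, 0.06, 0.50],   # strong NIR reflectance
    "water":      [0.10, 0.08, 0.05, 0.02],
}
```

Because SAM depends only on spectral shape, not overall brightness, it tolerates illumination differences between the library measurement and the scene.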


Agriculture

Although the cost of acquiring hyperspectral images is typically high, for specific crops and in specific climates
hyperspectral remote sensing is increasingly used for monitoring the development and health of crops. In
Australia, work is under way to use imaging spectrometers to detect grape variety and develop an early warning
system for disease outbreaks. Furthermore, work is under way to use hyperspectral data to detect the chemical
composition of plants, which can be used to detect the nutrient and water status of wheat in irrigated systems.

Another important area in agriculture is the detection of animal proteins in compound feeds in order to avoid
bovine spongiform encephalopathy (BSE), or mad-cow disease (MCD). Different studies have been done to
propose alternative tools to the reference method (classical microscopy). One of the first alternatives is NIR
microscopy (infrared microscopy), which combines the advantages of microscopy and NIR. In 2004, the first
study addressing this problem with hyperspectral imaging was published. Hyperspectral libraries are constructed
which are representative of the wide diversity of ingredients usually present in the preparation of compound
feeds. These libraries can be used together with chemometric tools to investigate the limit of detection,
specificity and reproducibility of the NIR hyperspectral imaging method for the detection and quantification of
animal ingredients in feed.


Mineralogy

Hyperspectral sensing of minerals, the original field of development for hyperspectral remote sensing, is now well
developed. Many minerals can be identified from images, and their relation to the presence of valuable minerals
such as gold and diamonds is well understood. Currently the move is towards understanding the relation between
oil and gas leakages from pipelines and natural wells, their effect on vegetation, and the resulting spectral
signatures. Recent work includes the PhD dissertations of Werff and Noomen.


Physics

Physicists use an electron microscopy technique that involves microanalysis using Energy-dispersive X-ray
spectroscopy (EDS), Electron energy loss spectroscopy (EELS), Infrared spectroscopy (IR), Raman spectroscopy,
or cathodoluminescence (CL) spectroscopy, in which the entire spectrum measured at each point is recorded. EELS
hyperspectral imaging is performed in a scanning transmission electron microscope (STEM); EDS and CL mapping
can be performed in STEM as well, or in a scanning electron microscope or electron probe microanalyzer (EPMA).
Often, multiple techniques (EDS, EELS, CL) are used simultaneously.

In a "normal" mapping experiment, an image of the sample is made that is simply the intensity of a particular
emission mapped in an XY raster. For example, an EDS map could be made of a steel sample, in which iron X-ray
intensity is used for the intensity grayscale of the image. Dark areas in the image would indicate non-iron-bearing
impurities. This could potentially give misleading results; if the steel contained tungsten inclusions, for example,
the high atomic number of tungsten could result in bremsstrahlung radiation that made the iron-free areas appear
to be rich in iron.


With hyperspectral mapping, instead, the entire spectrum is acquired at each mapping point, so a quantitative
analysis can be performed by computer post-processing of the data and a quantitative map of iron content
produced. This would show which areas contained no iron, despite the anomalous X-ray counts caused by
bremsstrahlung. Because EELS core-loss edges are small signals on top of a large background, hyperspectral
imaging allows large improvements to the quality of EELS chemical maps.
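The difference between a raw intensity map and a background-corrected hyperspectral map can be sketched as follows (a simplified illustration on synthetic data; real EELS processing fits a power-law background rather than the simple pre-edge mean used here):

```python
import numpy as np

def raw_intensity_map(cube, band):
    """'Normal' mapping: grayscale image of one emission channel."""
    return cube[:, :, band]

def background_corrected_map(cube, signal_band, bg_bands):
    """Hyperspectral-style mapping: estimate the background under the
    signal from neighbouring channels and subtract it per pixel."""
    background = cube[:, :, bg_bands].mean(axis=-1)
    return cube[:, :, signal_band] - background

# Synthetic 2x2 cube, 4 channels: pixel (0,0) has a real peak in
# channel 2; pixel (1,1) has only a high smooth background there.
cube = np.array([
    [[1.0, 1.0, 5.0, 1.0], [1.0, 1.0, 1.0, 1.0]],
    [[1.0, 1.0, 1.0, 1.0], [4.0, 4.0, 4.0, 4.0]],
])
corrected = background_corrected_map(cube, signal_band=2, bg_bands=[0, 1])
```

In the raw channel-2 map the background-only pixel looks just as bright as the true peak; after per-pixel background subtraction only the genuine signal survives, which is the effect described in the paragraph above.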

Similarly, in CL mapping, small shifts in the peak emission energy could be mapped, which would give information 
regarding slight chemical composition changes or changes in the stress state of a sample. 


Surveillance

Hyperspectral surveillance is the implementation of hyperspectral scanning technology for surveillance purposes.
Hyperspectral imaging is particularly useful in military surveillance because of the measures that military entities
now take to avoid airborne surveillance. Airborne surveillance has been in use since soldiers used tethered
balloons to spy on troops during the American Civil War, and since that time militaries have learned not only to
hide from the naked eye, but also to mask their heat signature to blend into the surroundings and avoid infrared
scanning. The idea that drives hyperspectral surveillance is that hyperspectral scanning draws information from
such a large portion of the light spectrum that any given object should have a unique spectral signature in at least
a few of the many bands that are scanned.

Advantages and disadvantages 

The primary advantage of hyperspectral imaging is that, because an entire spectrum is acquired at each point, the
operator needs no prior knowledge of the sample, and post-processing allows all available information from the
dataset to be mined.

The primary disadvantages are cost and complexity. Fast computers, sensitive detectors, and large data storage 
capacities are needed for analyzing hyperspectral data. Significant data storage capacity is necessary since 
hyperspectral cubes are large multi-dimensional datasets, potentially exceeding hundreds of megabytes. All of these 
factors greatly increase the cost of acquiring and processing hyperspectral data. Also, one of the hurdles that 
researchers have had to face is finding ways to program hyperspectral satellites to sort through data on their own and 
transmit only the most important images, as both transmission and storage of that much data could prove difficult 
and costly. As a relatively new analytical technique, the full potential of hyperspectral imaging has not yet been
realized.

See also 

• Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance

• Full Spectral Imaging

• Multi-spectral image

• Chemical imaging

• Remote Sensing

• Sensor fusion

• Liquid Crystal Tunable Filter



[1] Schurmer, J.H., (Dec 2003), Air Force Research Laboratories Technology Horizons 

[2] Ellis, J., (Jan 2001) Searching for oil seeps and oil-impacted soil with hyperspectral imagery ( 

currentissues/JanOl/ellis.htm), Earth Observation Magazine. 
[3] Smith, R.B. (July 14, 2006), Introduction to hyperspectral imaging with TMIPS ( 

pdf), Microimages Tutorial Web site 
[4] Lacar, F.M., et al., Use of hyperspectral imagery for mapping grape varieties in the Barossa Valley, South Australia ( 

2440/39292), Geoscience and remote sensing symposium (IGARSS'01) - IEEE 2001 International, vol.6 2875-2877p. 

[5] Ferwerda, J.G. (2005), Charting the quality of forage: measuring and mapping the variation of chemical components in foliage with
hyperspectral remote sensing, Wageningen University, ITC Dissertation 126, 166p. ISBN 90-8504-209-7
[6] Tilling, A.K., et al., (2006) Remote sensing to detect nitrogen and water stress in wheat ( 

plenary/technology/4584_tillingak.htm), The Australian Society of Agronomy 
[7] Fernandez Pierna, J. A., et al., 'Combination of Support Vector Machines (SVM) and Near Infrared (NIR) imaging spectroscopy for the 

detection of meat and bone meat (MBM) in compound feeds' Journal of Chemometrics 18 (2004) 341-349 
[8] Werff H. (2006), Knowledge based remote sensing of complex objects: recognition of spectral and spatial patterns resulting from natural 

hydrocarbon seepages (, Utrecht University, ITC Dissertation 131, 138p. ISBN 

[9] Noomen, M.F. (2007), Hyperspectral reflectance of vegetation affected by underground hydrocarbon gas seepage (
library/papers_2007/phd/noomen.pdf), Enschede, ITC, 151p. ISBN 978-90-8504-671-4.

External links 

• SpecTIR ( - Hyperspectral solutions and end to end global data collection & analysis 

• Opticks ( - open source, remote sensing application and development framework. 

• ITT Visual Information Solutions - ENVI Hyperspectral Image Processing Software ( 

• A Hyperspectral Imaging Prototype ( Fourier transform 
spectroscopy is combined with Fabry-Perot interferometry 

• Middleton Research ( Hyperspectral Imaging products, custom engineering 

• Photon etc. (http://photonetc.com/index.php?lan=en&sec=300&sub1=3000&sub2=1023) Hyperspectral
Imaging Systems

• UmBio - Evince. Hyperspectral image analysis in real-time. Visual information solutions, see industrial demo 
movies ( files/Products/Evince Image/Evincelmage.aspx) 

• A Matlab Hyperspectral Toolbox ( 

• Telops Hyper-Cam ( Commercial infrared hyperspectral camera 


Multi-spectral image 

A Multi-spectral image is one that captures image data at specific frequencies across the electromagnetic spectrum. 
The wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths, 
including light from frequencies beyond the visible light range, such as infrared. Multi-spectral imaging can allow 
extraction of additional information that the human eye fails to capture with its receptors for red, green and blue. It 
was originally developed for space-based imaging. 

Multi-spectral images are the main type of images acquired by remote sensing (RS) radiometers. Dividing the
spectrum into many bands, multi-spectral is the opposite of panchromatic, which records only the total intensity of
radiation falling on each pixel. Usually satellites have 3 to 7 or more radiometers (Landsat has 7). Each one acquires
one digital image (in remote sensing, called a scene) in a small band of visible spectra, ranging from 0.4 μm to 0.7 μm,
called the red-green-blue (RGB) region, and going to infra-red wavelengths of 0.7 μm to 10 or more μm, classified as
NIR (Near InfraRed), MIR (Middle InfraRed) and FIR (Far InfraRed or Thermal). In the Landsat case the 7 scenes
comprise a 7-band multispectral image. Multispectral images with more numerous bands, finer spectral resolution
or wider spectral coverage may be called "hyperspectral" or "ultra-spectral".

This technology has also assisted in the interpretation of ancient papyri, such as those found at Herculaneum, by
imaging the fragments in the infrared range (1000 nm). Often the text on the documents appears to the naked eye as
black ink on black paper. At 1000 nm, the difference in light reflectivity makes the text clearly readable. It has
also been used to image the Archimedes Palimpsest by imaging the parchment leaves in bandwidths from 365-870
nm and then using advanced digital image processing techniques to reveal the undertext of Archimedes' work.

The availability of wavelengths for remote sensing and imaging is limited by infrared window and optical window. 

Spectral bands 

The wavelengths are approximate; exact values depend on the particular satellite's instruments: 

• Blue, 450-515..520 nm, used for atmospheric and deep water imaging. Can reach within 150 feet (46 m) deep in
clear water.

• Green, 515..520-590..600 nm, used for imaging of vegetation and deep water structures, up to 90 feet (27 m) in
clear water.

• Red, 600..630-680..690 nm, used for imaging of man-made objects, water up to 30 feet (9.1 m) deep, soil, and
vegetation.

• Near infrared, 750-900 nm, primarily for imaging of vegetation.

• Mid-infrared, 1550-1750 nm, for imaging vegetation and soil moisture content, and some forest fires.

• Mid-infrared, 2080-2350 nm, for imaging soil, moisture, geological features, silicates, clays, and fires.

• Thermal infrared, 10400-12500 nm, uses emitted radiation instead of reflected, for imaging of geological 
structures, thermal differences in water currents, fires, and for night studies. 

• Radar and related technologies, useful for mapping terrain and for detecting various objects. 


Spectral band usage 

For different purposes, different combinations of spectral bands can be used. They are usually represented with red, 
green, and blue channels. Mapping of bands to colors depends on the purpose of the image and the personal 
preferences of the analysts. Thermal infrared is often omitted from consideration due to poor spatial resolution, 
except for special purposes. 

• True-color. Uses only red, green, and blue channels, mapped to their respective colors. A plain color photograph. 
Good for analyzing man-made objects. Easy to understand for beginner analysts. 

• Green-red-infrared, where blue channel is replaced with near infrared. Vegetation, highly reflective in near IR, 
then shows as blue. This combination is often used for detection of vegetation and camouflage. 

• Blue-nearlR-midIR, where blue channel uses visible blue, green uses near-infrared (so vegetation stays green), 
and mid-infrared is shown as red. Such images allow seeing the water depth, vegetation coverage, soil moisture 
content, and presence of fires, all in a single image. 

Many other combinations are in use. Near infrared is often shown as red, making vegetation-covered areas appear
red.
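A band-to-color mapping of this kind can be sketched as follows (synthetic data; the band ordering and reflectance values are assumptions made for illustration):

```python
import numpy as np

def false_color(cube, band_for_red, band_for_green, band_for_blue):
    """Compose a 3-channel display image from chosen spectral bands.
    Mapping near-IR to the red channel makes vegetation appear red."""
    rgb = np.stack([cube[:, :, band_for_red],
                    cube[:, :, band_for_green],
                    cube[:, :, band_for_blue]], axis=-1)
    # normalize to [0, 1] for display
    return rgb / max(rgb.max(), 1e-12)

# Synthetic scene; band order (blue, green, red, near-IR) is hypothetical.
scene = np.zeros((2, 2, 4))
scene[0, 0] = [0.05, 0.10, 0.08, 0.60]    # vegetation: high NIR

img = false_color(scene, band_for_red=3, band_for_green=2, band_for_blue=1)
```

With near-IR assigned to the red channel, the vegetation pixel comes out strongly red, exactly the effect the text describes.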

Multispectral Data Analysis Software 

• MicroMSI, endorsed by the NGA.

• Opticks - an open source remote sensing application. 

See also 

• Hyperspectral imaging
• Full Spectral Imaging
• Remote sensing
• Spy satellite
• Imaging spectroscopy
• Imaging spectrometer
• Liquid Crystal Tunable Filter
• Satellite imagery


• Harold Hough: Satellite Surveillance, Loompanics Unlimited, 1991, ISBN 1-55950-077-8 

Fluorescence Imaging 



A fluorescence microscope (colloquially synonymous with epifluorescence microscope) is an optical microscope
used to study properties of organic or inorganic substances using the phenomena of fluorescence and
phosphorescence instead of, or in addition to, reflection and absorption. [1] [2]

An upright fluorescence microscope (Olympus BX61) with the fluorescent filter cube turret above the objective
lenses, coupled with a digital camera.

In most cases, a component of interest in the specimen can be labeled 
specifically with a fluorescent molecule called a fluorophore (such as 
green fluorescent protein (GFP), fluorescein or DyLight 488). The 
specimen is illuminated with light of a specific wavelength (or 
wavelengths) which is absorbed by the fluorophores, causing them to 
emit light of longer wavelengths (i.e. of a different color than the 
absorbed light). The illumination light is separated from the much 
weaker emitted fluorescence through the use of a spectral emission 
filter. Typical components of a fluorescence microscope are the light 
source (xenon arc lamp or mercury-vapor lamp), the excitation filter, 
the dichroic mirror (or dichromatic beamsplitter), and the emission 
filter (see figure below). The filters and the dichroic are chosen to 
match the spectral excitation and emission characteristics of the 
fluorophore used to label the specimen. In this manner, the 
distribution of a single fluorophore (color) is imaged at a time. 
Multi-color images of several types of fluorophores must be composed
by combining several single-color images.
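The composition of single-color images into one multi-color image can be sketched as additive blending (an illustrative simplification of what imaging software does; the channel data and color assignments here are synthetic):

```python
import numpy as np

def compose_multicolor(channel_images, channel_colors):
    """Overlay single-fluorophore images, each assigned a display color.
    channel_images: list of 2-D arrays (one per fluorophore, values 0..1).
    channel_colors: list of (r, g, b) tuples, one per fluorophore."""
    h, w = channel_images[0].shape
    out = np.zeros((h, w, 3))
    for img, (r, g, b) in zip(channel_images, channel_colors):
        # broadcast the grayscale channel onto its display color
        out += img[:, :, None] * np.array([r, g, b], dtype=float)
    return np.clip(out, 0.0, 1.0)

# Two hypothetical channels: DAPI-stained nuclei shown blue,
# a GFP-labeled protein shown green.
nuclei = np.array([[1.0, 0.0], [0.0, 0.0]])
gfp    = np.array([[0.0, 1.0], [0.0, 0.0]])
merged = compose_multicolor([nuclei, gfp], [(0, 0, 1), (0, 1, 0)])
```

Each channel is acquired with its own filter set, so the overlay is a purely computational step after acquisition.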


An inverted fluorescence microscope (Nikon TE2000) with the fluorescent filter cube turret below the stage. Note
the orange plate that allows the user to look at a sample while protecting their eyes from the UV light.

Most fluorescence microscopes in use are epifluorescence microscopes (i.e. excitation and observation of the
fluorescence are from above (epi-) the specimen). These microscopes have become an important tool in the field
of biology, opening the door to more advanced microscope designs such as the confocal microscope and the total
internal reflection fluorescence (TIRF) microscope.



Fluorophores lose their ability to fluoresce as they are illuminated in a process called photobleaching. Special care 
must be taken to prevent photobleaching through the use of more robust fluorophores, by minimizing illumination, 
or by introducing a scavenger system to reduce the rate of photobleaching. 

Epifluorescence microscopy 

Schematic of a fluorescence microscope: light source, excitation filter, dichroic mirror, emission filter.

Epifluorescence microscopy is a method of fluorescence microscopy 
that is widely used in life sciences. The excitatory light is passed from 
above (or, for inverted microscopes, from below), through the 
objective lens and then onto the specimen instead of passing it first 
through the specimen. The fluorescence in the specimen gives rise to 
emitted light which is focused to the detector by the same objective 
that is used for the excitation. Since most of the excitatory light is 
transmitted through the specimen, only reflected excitatory light 
reaches the objective together with the emitted light and this method 
therefore gives an improved signal to noise ratio. An additional filter 
between the objective and the detector can filter out the remaining 
excitation light from fluorescent light. A common use in biology is to 
apply fluorescent or fluorochrome stains to the specimen in order to 
image distributions of proteins or other molecules of interest. 

Improvements and sub-diffraction techniques 

The nature of light limits the size of the spot to which light can be focused. According to the diffraction limit, a
focused light distribution cannot be made smaller than approximately half of the wavelength of the light used.
Discovered in the 19th century by Ernst Abbe, this has long been a barrier to the achievable resolution of
fluorescence light microscopes. While resolution denotes the ability to discern different objects of the same kind,
localizing or tracking single particles can be performed with a precision well below the diffraction limit.
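The "half the wavelength" rule is usually written as the Abbe limit, d = λ / (2 NA), where NA is the numerical aperture of the objective; a quick numeric check (wavelength and NA values chosen for illustration):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: d = lambda / (2 * NA).
    For high-NA objectives (NA ~ 1.4) this comes out near half the
    wavelength, matching the rule of thumb stated above."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (510 nm) with a good oil-immersion objective (NA = 1.4):
d = abbe_limit_nm(510, 1.4)   # roughly 182 nm
```

For a dry objective with NA = 1.0 the same formula gives exactly half the wavelength, which is why sub-200 nm structures stay unresolved in conventional fluorescence microscopy.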

Several improvements in microscopy techniques were invented in the 20th century and increased resolution and
contrast to some extent; however, they did not overcome the diffraction limit. In 1978 the first theoretical ideas
were developed to break this barrier by using a 4Pi microscope, a confocal laser scanning fluorescence
microscope in which the light is focused ideally from all sides onto a common focus that is used to scan the
object by 'point-by-point' excitation combined with 'point-by-point' detection. However, the first experimental
demonstration of the 4Pi microscope took place only in 1994. 4Pi microscopy maximizes the number of available
focusing directions by using two opposing objective lenses, or multi-photon microscopy using red-shifted light
and multi-photon excitation.

The first technique to really achieve a sub-diffraction resolution was STED microscopy, proposed in 1994. This 
method and all techniques following the RESOLFT concept rely on a strong non-linear interaction between light and 
fluorescing molecules. The molecules are driven strongly between distinguishable molecular states at each specific 
location, so that finally light can be emitted at only a small fraction of space, hence an increased resolution. 

Also in the 1990s, another super-resolution microscopy method based on wide-field microscopy was developed.
Substantially improved size resolution of cellular nanostructures stained with a fluorescent marker was achieved
by the development of SPDM localization microscopy and structured laser illumination (spatially modulated
illumination, SMI). Combining the principle of SPDM with SMI resulted in the development of the Vertico SMI
microscope. [6] [7] Single-molecule detection of normally blinking fluorescent dyes like GFP can be achieved by
using a further development of SPDM, the so-called SPDMphymod technology, which makes it possible



to detect and count two different fluorescent molecule types at the molecular level (this technology is referred to
as 2CLM, 2-Color Localization Microscopy).

Alternatively, the advent of photoactivated localization microscopy could achieve similar results by relying on 
blinking or switching of single molecules, where the fraction of fluorescing molecules is very small at each time. 
This stochastic response of molecules on the applied light corresponds also to a highly nonlinear interaction, leading 
to subdiffraction resolution. 



Epifluorescent imaging of the three components in a dividing human cancer cell. DNA is stained blue, a protein
called INCENP is green, and the microtubules are red. Each fluorophore is imaged separately using a different
combination of excitation and emission filters, and the images are captured sequentially using a digital CCD
camera, then overlaid to give a complete image.

Endothelial cells under the microscope. Nuclei are stained blue with DAPI, microtubules are marked green by an
antibody bound to FITC, and actin filaments are labeled red with phalloidin bound to TRITC. Bovine pulmonary
artery endothelial (BPAE) cells.

Human lymphocyte nucleus stained with DAPI, with chromosome 13 (green) and 21 (red) centromere probes
hybridized (fluorescence in situ hybridization, FISH).

Yeast cell membrane visualized by some membrane proteins fused with RFP and GFP fluorescent markers.
Superposition of light from both markers results in a yellow color.





Super Resolution Microscopy: single molecule detection in a cancer cell, with distance measurements in the
15 nm range.

Super Resolution Microscopy: co-localization microscopy (2CLM) with GFP and RFP fusion proteins (nucleus of
a bone cancer cell); 120,000 localized molecules in a wide-field area (470 μm²) measured with a
Vertico-SMI/SPDMphymod microscope.




Fluorescence microscopy of DNA expression in the human wild-type and P239S mutant palladin.

Fluorescence microscopy images of sun flares pathology in a blood cell, showing the affected areas in red.

See also 

• Microscope 

• Mercury-vapor lamp, Xenon arc lamp 

• Stokes shift 


[1] Spring KR, Davidson MW. "Introduction to Fluorescence Microscopy" ( 

fluorescenceintro.html). Nikon MicroscopyU . . Retrieved 2008-09-28. 
[2] "The Fluorescence Microscope" ( Microscopes — Help 

Scientists Explore Hidden Worlds. The Nobel Foundation. Retrieved 2008-09-28.
[3] Considerations on a laser-scanning-microscope with high resolution and depth of field: C. Cremer and T. Cremer in MICROSCOPICA ACTA Vol. 81, Number 1, September, pp. 31-44 (1978)
[4] S.W. Hell, E.H.K. Stelzer, S. Lindek, C. Cremer (1994). "Confocal microscopy with an increased detection aperture: type-B 4Pi confocal 

microscopy" ( Optics Letters 19: 222—224. 

doi:10.1364/OL. 19.000222. . 
[5] M. Hausmann, B. Schneider, J. Bradl, C. Cremer (1997): High-precision distance microscopy of 3D-nanostructures by a spatially modulated 

excitation fluorescence microscope. In: Optical Biopsies and Microscopic Techniques II (Edts Bigio IJ, Schneckenburger H, Slavik J, 

Svanberg K, Viallet PM), Proc. SPIE 3197: 217-222 
[6] High precision structural analysis of subnuclear complexes in fixed and live cells via Spatially Modulated Illumination (SMI) microscopy: J. 

Reymann, D. Baddeley, P. Lemmer, W. Stadter, T. Jegou, K. Rippe, C. Cremer, U. Birk in CHROMOSOME RESEARCH, Vol. 16, pp. 367 

-382 (2008) 
[7] Nano-structure analysis using Spatially Modulated Illumination microscopy: D. Baddeley, C. Batram, Y. Weiland, C. Cremer, U.J. Birk in 

NATURE PROTOCOLS, Vol 2, pp. 2640 - 2646 (2007) 
[8] Manuel Gunkel, Fabian Erdel, Karsten Rippe, Paul Lemmer, Rainer Kaufmann, Christoph Hormann, Roman Amberger and Christoph 

Cremer: Dual color localization microscopy of cellular nanostructures. In: Biotechnology Journal, 2009, 4, 927-938. ISSN 1860-6768 

External links 

• ( - Database of fluorescent dyes. 

Fluorescence correlation spectroscopy 172 

Fluorescence correlation spectroscopy 

Fluorescence correlation spectroscopy (FCS) is a correlation analysis of fluctuations in fluorescence intensity. The analysis yields parameters of the physical processes underlying the fluctuations. One interesting application is the analysis of concentration fluctuations of fluorescent particles (molecules) in solution. Here, the fluorescence emitted from a very small volume of solution containing a small number of fluorescent particles (molecules) is observed; the intensity fluctuates due to the Brownian motion of the particles. In other words, the number of particles in the sub-volume defined by the optical system changes randomly around an average number. The analysis gives the average number of fluorescent particles and the average diffusion time as a particle passes through the volume; from these, both the concentration and the size of the particle (molecule) are determined. Because the method observes a small number of molecules in a very small spot, it is a highly sensitive analytical tool, and both parameters are essential in biochemical research, biophysics, and chemistry. In contrast to methods such as HPLC analysis, FCS involves no physical separation process and offers good spatial resolution, determined by the optics. Moreover, the method makes it possible to observe fluorescence-tagged molecules in biochemical pathways in intact living cells, opening a new area of "in situ" or "in vivo" biochemistry: tracing biochemical pathways in intact cells and organs.

Commonly, FCS is employed in the context of optical microscopy, in particular confocal or two-photon microscopy. 
In these techniques light is focused on a sample and the measured fluorescence intensity fluctuations (due to 
diffusion, physical or chemical reactions, aggregation, etc.) are analyzed using the temporal autocorrelation. Because 
the measured property is essentially related to the magnitude and/or the amount of fluctuations, there is an optimum 
measurement regime at the level when individual species enter or exit the observation volume (or turn on and off in 
the volume). When too many entities are measured at the same time the overall fluctuations are small in comparison 
to the total signal and may not be resolvable; in the other direction, if the individual fluctuation events are too
sparse in time, one measurement may take prohibitively long. FCS is in a way the fluorescent counterpart to
dynamic light scattering, which uses coherent light scattering, instead of (incoherent) fluorescence. 

When an appropriate model is known, FCS can be used to obtain quantitative information such as 

• diffusion coefficients 

• hydrodynamic radii 

• average concentrations 

• kinetic chemical reaction rates 

• singlet-triplet dynamics 

Because fluorescent markers come in a variety of colors and can be specifically bound to a particular molecule (e.g. 
proteins, polymers, metal-complexes, etc.), it is possible to study the behavior of individual molecules (in rapid 
succession in composite solutions). With the development of sensitive detectors such as avalanche photodiodes the 
detection of the fluorescence signal coming from individual molecules in highly dilute samples has become practical. 
With this emerged the possibility to conduct FCS experiments in a wide variety of specimens, ranging from 
materials science to biology. The advent of engineered cells with genetically tagged proteins (like green fluorescent 
protein) has made FCS a common tool for studying molecular dynamics in living cells. 

History
Signal-correlation techniques were first experimentally applied to fluorescence in 1972 by Magde, Elson, and Webb [1], who are therefore commonly credited as the "inventors" of FCS. The technique was further developed in a group of papers by these and other authors soon after, establishing the theoretical foundations and types of applications. [2] [3] [4] See Thompson (1991) [5] for a review of that period.

Beginning in 1993 [6], a number of improvements in the measurement techniques, notably using confocal microscopy, and then two-photon microscopy, to better define the measurement volume and reject background [7] [8], greatly improved the signal-to-noise ratio and allowed single molecule sensitivity. Since then, there has been a renewed interest in FCS, and as of August 2007 there have been over 3,000 papers using FCS found in Web of Science. See Krichevsky and Bonnet [9] for a recent review. In addition, there has been a flurry of activity extending FCS in various ways, for instance to laser scanning and spinning-disk confocal microscopy (from a stationary, single point measurement), in using cross-correlation (FCCS) between two fluorescent channels instead of autocorrelation, and in using Förster Resonance Energy Transfer (FRET) instead of fluorescence.

Typical FCS setup 

The typical FCS setup consists of a laser line (wavelengths ranging typically from 405—633 nm (cw), and from 
690—1100 nm (pulsed)), which is reflected into a microscope objective by a dichroic mirror. The laser beam is 
focused in the sample, which contains fluorescent particles (molecules) at such high dilution that only a few are
within the focal spot (usually 1—100 molecules in one fL). When the particles cross the focal volume, they fluoresce. 
This light is collected by the same objective and, because it is red-shifted with respect to the excitation light it passes 
the dichroic mirror reaching a detector, typically a photomultiplier tube or avalanche photodiode detector. The 
resulting electronic signal can be stored either directly as an intensity versus time trace to be analyzed at a later point, 
or computed to generate the autocorrelation directly (which requires special acquisition cards). The FCS curve by 
itself only represents a time-spectrum. Conclusions on physical phenomena have to be extracted from there with 
appropriate models. The parameters of interest are found after fitting the autocorrelation curve to modeled functional forms.

The measurement volume 

The measurement volume is a convolution of illumination (excitation) and detection geometries, which result from 
the optical elements involved. The resulting volume is described mathematically by the point spread function (or 
PSF); it is essentially the image of a point source. The PSF is often described as an ellipsoid (with unsharp boundaries) of a few hundred nanometers in focal diameter, and almost one micrometre along the optical axis. The
shape varies significantly (and has a large impact on the resulting FCS curves) depending on the quality of the 
optical elements (it is crucial to avoid astigmatism and to check the real shape of the PSF on the instrument). In the 
case of confocal microscopy, and for small pinholes (around one Airy unit), the PSF is well approximated by 

PSF(r, z) = I_0 \, e^{-2r^2/\omega_{xy}^2} \, e^{-2z^2/\omega_z^2}

where I_0 is the peak intensity, r and z are the radial and axial positions, \omega_{xy} and \omega_z are the radial and axial radii, and \omega_z > \omega_{xy}. This Gaussian form is assumed in deriving the functional form of the autocorrelation.
Typically \omega_{xy} is 200-300 nm, and \omega_z is 2-6 times larger. One common way of calibrating the measurement
volume parameters is to perform FCS on a species with known diffusion coefficient and concentration (see below). 
Diffusion coefficients for common fluorophores in water are given in a later section. 
The Gaussian approximation works to varying degrees depending on the optical details, and corrections can sometimes be applied to offset the errors in approximation. [12]


Autocorrelation function 

The (temporal) autocorrelation function is the correlation of a time series with itself shifted by time \tau, as a function of \tau:

G(\tau) = \frac{\langle \delta I(t)\,\delta I(t+\tau) \rangle}{\langle I(t) \rangle^2} = \frac{\langle I(t)\,I(t+\tau) \rangle}{\langle I(t) \rangle^2} - 1

where \delta I(t) = I(t) - \langle I(t) \rangle is the deviation from the mean intensity. The normalization (denominator) here is the most commonly used for FCS, because then the correlation at \tau = 0, G(0), is related to the average number of particles in the measurement volume.
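The normalized autocorrelation above can be estimated directly from a binned photon-count trace. A minimal numpy sketch (a direct estimator for illustration; hardware correlators use multi-tau schemes for efficiency, and the Poisson trace here is only a stand-in for real data):

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Direct estimator of G(k) = <dI(t) dI(t+k)> / <I>^2 for integer lags k,
    where k is in units of the time-bin width."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()                      # deviation from the mean intensity
    denom = I.mean() ** 2                  # FCS normalization <I>^2
    return np.array([np.mean(dI[:len(I) - k] * dI[k:]) / denom
                     for k in range(1, max_lag + 1)])

# For an uncorrelated (pure shot-noise) trace, G(k) should scatter around 0.
rng = np.random.default_rng(0)
trace = rng.poisson(5.0, size=100_000)
G = fcs_autocorrelation(trace, 10)
```

For real data the trace would be the detector counts per time bin; correlated fluctuations from diffusing particles then produce a decaying G(k).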

Interpreting the autocorrelation function 

To extract quantities of interest, the autocorrelation data can be fitted, typically using a nonlinear least squares 
algorithm. The fit's functional form depends on the type of dynamics (and the optical geometry in question). 

Normal diffusion 

The fluorescent particles used in FCS are small and thus experience thermal motion in solution. The simplest FCS experiment is thus normal 3D diffusion, for which the autocorrelation is:

G(\tau) = G(0)\,\frac{1}{(1 + \tau/\tau_D)(1 + a^{-2}\,\tau/\tau_D)^{1/2}} + G(\infty)

where a = \omega_z/\omega_{xy} is the ratio of axial to radial e^{-2} radii of the measurement volume, and \tau_D is the characteristic residence time. This form was derived assuming a Gaussian measurement volume. Typically, the fit would have three free parameters: G(0), G(\infty), and \tau_D, from which the diffusion coefficient and fluorophore concentration can be obtained.

With the normalization used in the previous section, G(0) gives the mean number of diffusers in the volume, \langle N \rangle, or equivalently, with knowledge of the observation volume size, the mean concentration:

G(0) = \frac{1}{\langle N \rangle} = \frac{1}{V_{eff}\,\langle C \rangle}

where the effective volume is found from integrating the Gaussian form of the measurement volume and is given by:

V_{eff} = \pi^{3/2}\,\omega_{xy}^2\,\omega_z.

\tau_D gives the diffusion coefficient:

D = \omega_{xy}^2 / 4\tau_D.
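The fitting step can be sketched end to end: fit the 3D diffusion model to a correlation curve, then convert G(0) and \tau_D into \langle N \rangle and D using the relations above. A minimal scipy sketch on synthetic, noise-free data; the beam radius and all parameter values are illustrative assumptions, not instrument values:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_diff3d(tau, g0, tau_d, a, g_inf):
    # G(tau) = G(0) / ((1 + tau/tau_D) * sqrt(1 + a^-2 tau/tau_D)) + G(inf)
    x = tau / tau_d
    return g0 / ((1 + x) * np.sqrt(1 + x / a**2)) + g_inf

w_xy = 0.25e-6                        # assumed radial e^-2 radius, metres
tau = np.logspace(-6, 0, 60)          # lag times, seconds
data = g_diff3d(tau, 0.1, 1e-3, 5.0, 0.0)   # synthetic "measurement"

popt, _ = curve_fit(g_diff3d, tau, data, p0=[0.05, 5e-4, 4.0, 0.0])
g0_fit, tau_d_fit = popt[0], popt[1]
N = 1.0 / g0_fit                      # mean particle number, from G(0) = 1/<N>
D = w_xy**2 / (4 * tau_d_fit)         # diffusion coefficient, m^2/s
```

With real (noisy) data one would also weight the residuals and, as the text notes, calibrate w_xy against a species of known diffusion coefficient.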

Anomalous diffusion 

If the diffusing particles are hindered by obstacles or pushed by a force (molecular motors, flow, etc.) the dynamics 
is often not sufficiently well-described by the normal diffusion model, where the mean squared displacement (MSD) 
grows linearly with time. Instead the diffusion may be better described as anomalous diffusion, where the temporal 
dependence of the MSD is non-linear, as in the power law:

MSD = 6 D_\alpha t^\alpha

where D_\alpha is an anomalous diffusion coefficient. "Anomalous diffusion" commonly refers only to this very generic
model, and not the many other possibilities that might be described as anomalous. Also, a power law is, in a strict 
sense, the expected form only for a narrow range of rigorously defined systems, for instance when the distribution of 
obstacles is fractal. Nonetheless a power law can be a useful approximation for a wider range of systems. 
The FCS autocorrelation function for anomalous diffusion is: 


G(\tau) = G(0)\,\frac{1}{(1 + (\tau/\tau_D)^\alpha)(1 + a^{-2}(\tau/\tau_D)^\alpha)^{1/2}} + G(\infty)

where the anomalous exponent \alpha is the same as above, and becomes a free parameter in the fitting.

Using FCS, the anomalous exponent has been shown to be an indication of the degree of molecular crowding (it is less than one, and smaller for greater degrees of crowding). [13]
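As a sanity check on the model, the anomalous form should reduce to the normal-diffusion autocorrelation when \alpha = 1. A short sketch (all parameter values are arbitrary illustrations):

```python
import numpy as np

def g_anomalous(tau, g0, tau_d, alpha, a, g_inf):
    """Anomalous-diffusion FCS model; alpha < 1 indicates subdiffusion (crowding)."""
    x = (tau / tau_d) ** alpha
    return g0 / ((1 + x) * np.sqrt(1 + x / a**2)) + g_inf

# With alpha = 1 this coincides with the normal 3D diffusion form:
tau = np.logspace(-6, 0, 20)
normal = 0.1 / ((1 + tau / 1e-3) * np.sqrt(1 + (tau / 1e-3) / 5.0**2))
assert np.allclose(g_anomalous(tau, 0.1, 1e-3, 1.0, 5.0, 0.0), normal)
```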

Polydisperse diffusion 

If there are diffusing particles with different sizes (diffusion coefficients), it is common to fit to a function that is the 
sum of single component forms: 

G(\tau) = G(0) \sum_i \frac{\alpha_i}{(1 + \tau/\tau_{D,i})(1 + a^{-2}\,\tau/\tau_{D,i})^{1/2}} + G(\infty)

where the sum is over the different particle sizes, indexed by i, and \alpha_i gives the weighting, which is related to the quantum yield and concentration of each type. This introduces new parameters, which makes the fitting more difficult, as a higher-dimensional space must be searched. Nonlinear least squares fitting typically becomes unstable with even a small number of \tau_{D,i}'s. A more robust fitting scheme, especially useful for polydisperse samples, is the Maximum Entropy Method. [14]

Diffusion with flow 

With diffusion together with a uniform flow with velocity v in the lateral direction, the autocorrelation is:

G(\tau) = G(0)\,\frac{1}{(1 + \tau/\tau_D)(1 + a^{-2}\,\tau/\tau_D)^{1/2}}\,\exp\!\left[-\frac{(\tau/\tau_v)^2}{1 + \tau/\tau_D}\right] + G(\infty)

where \tau_v = \omega_{xy}/v is the average residence time if there is only a flow (no diffusion).

Chemical relaxation 

A wide range of possible FCS experiments involve chemical reactions that continually fluctuate from equilibrium 
because of thermal motions (and then "relax"). In contrast to diffusion, which is also a relaxation process, the 
fluctuations cause changes between states of different energies. One very simple system showing chemical relaxation 
would be a stationary binding site in the measurement volume, where particles only produce signal when bound (e.g. 
by FRET, or if the diffusion time is much faster than the sampling interval). In this case the autocorrelation is: 

G(\tau) = G(0)\,\exp(-\tau/\tau_B) + G(\infty)

where

\tau_B = (k_{on} + k_{off})^{-1}

is the relaxation time and depends on the reaction kinetics (the on and off rates), and G(0) is related to the equilibrium constant K.

Most systems with chemical relaxation also show measurable diffusion, and the autocorrelation function will depend on the details of the system. If the diffusion and chemical reaction are decoupled, the combined autocorrelation is the product of the chemical and diffusive autocorrelations.

Triplet state correction 

The autocorrelations above assume that the fluctuations are not due to changes in the fluorescent properties of the particles. However, for the majority of (bio)organic fluorophores, e.g. green fluorescent protein, rhodamine, Cy3 and Alexa Fluor dyes, some fraction of illuminated particles are excited to a triplet state (or other non-radiatively decaying states) and then do not emit photons for a characteristic relaxation time \tau_T. Typically \tau_T is on the order of microseconds, which is usually smaller than the dynamics of interest (e.g. \tau_D) but large enough to be measured.



A multiplicative term is added to the autocorrelation to account for the triplet state. For normal diffusion:

G(\tau) = G_{diff}(\tau)\left(1 + \frac{F}{1-F}\,e^{-\tau/\tau_T}\right)

where G_{diff}(\tau) is the purely diffusive autocorrelation from above, F is the fraction of particles that have entered the triplet state, and \tau_T is the corresponding triplet-state relaxation time. If the dynamics of interest are much slower than the triplet-state relaxation, the short-time component of the autocorrelation can simply be truncated and the triplet term is unnecessary.
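The multiplicative correction in the common form above (1 + (F/(1-F)) exp(-\tau/\tau_T), presented here as an illustration of that one widely used parameterization) can be coded as a factor applied to the diffusive model:

```python
import numpy as np

def triplet_factor(tau, F, tau_t):
    """Multiplicative triplet correction: 1 + (F/(1-F)) * exp(-tau/tau_t)."""
    return 1.0 + (F / (1.0 - F)) * np.exp(-tau / tau_t)

def g_with_triplet(tau, g0, tau_d, a, g_inf, F, tau_t):
    # Normal 3D diffusion multiplied by the triplet term.
    x = tau / tau_d
    g_diff = g0 / ((1 + x) * np.sqrt(1 + x / a**2))
    return triplet_factor(tau, F, tau_t) * g_diff + g_inf

# The correction is largest at tau = 0 and vanishes for tau >> tau_t:
assert abs(triplet_factor(0.0, 0.2, 1e-6) - 1.25) < 1e-12
```

For F = 0.2 the zero-lag factor is 1 + 0.2/0.8 = 1.25, and for lags much longer than \tau_T the factor returns to 1, leaving the diffusive decay unchanged.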

Common fluorescent probes 

The fluorescent species used in FCS is typically a biomolecule of interest that has been tagged with a fluorophore 
(using immunohistochemistry for instance), or is a naked fluorophore that is used to probe some environment of 
interest (e.g. the cytoskeleton of a cell). The following table gives diffusion coefficients of some common 
fluorophores in water at room temperature, and their excitation wavelengths. 

Fluorescent dye | D (×10⁻¹⁰ m² s⁻¹) | Excitation wavelength (nm)

Rhodamine 6G | 2.8; 3.0; 4.14 ± 0.05 @ 25.00 °C [16] [17] [18] | —
Rhodamine 110 | — | —
Tetramethyl rhodamine | — | —
— | 2.5; 3.7 ± 0.15 @ 25.00 °C [20] [21] | —
— | 1.96; 4.35 @ 22.5 ± 0.5 °C [22] [23] | —
— | 4.07 ± 0.1 @ 25.00 °C | —
— | 4.26 ± 0.08 @ 25.00 °C | —
2',7'-difluorofluorescein (Oregon Green 488) | 4.11 ± 0.06 @ 25.00 °C | —

Variations of FCS 

FCS almost always refers to the single point, single channel, temporal autocorrelation measurement, although the 
term "fluorescence correlation spectroscopy" out of its historical scientific context implies no such restriction. FCS 
has been extended in a number of variations by different researchers, with each extension generating another name 
(usually an acronym). 

Fluorescence cross-correlation spectroscopy (FCCS) 

FCS is sometimes used to study molecular interactions using differences in diffusion times (e.g. the product of an 
association reaction will be larger and thus have larger diffusion times than the reactants individually); however, 
FCS is relatively insensitive to molecular mass as can be seen from the following equation relating molecular mass 
to the diffusion time of globular particles (e.g. proteins): 

\tau_D = \frac{3\pi\eta\,\omega_{xy}^2}{2kT}\left(\frac{3 M \bar{v}}{4\pi N_A}\right)^{1/3}

where \eta is the viscosity of the sample, M is the molecular mass of the fluorescent species, \bar{v} is its specific volume, and N_A is Avogadro's number. In practice, the diffusion times need to be sufficiently different, by a factor of at least 1.6, which means the molecular masses must


differ by a factor of 4. Dual color fluorescence cross-correlation spectroscopy (FCCS) measures interactions by 
cross-correlating two or more fluorescent channels (one channel for each reactant), which distinguishes interactions 
more sensitively than FCS, particularly when the mass change in the reaction is small. 
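The cube-root dependence explains the factor of four quoted above: since \tau_D scales as M^{1/3} for globular particles, resolving a factor of 1.6 in diffusion time requires a mass ratio of 1.6³:

```python
# tau_D scales as M^(1/3) for globular particles, so resolving a factor of
# 1.6 in diffusion time requires a mass ratio of about 1.6**3:
mass_ratio = 1.6 ** 3
print(round(mass_ratio, 2))  # 4.1: masses must differ roughly fourfold
```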


Brightness analysis methods (N&B, [28] PCH, [29] FIDA, [30] Cumulant Analysis [31])

Fluorescence cross correlation spectroscopy overcomes the weak dependence of diffusion rate on molecular mass by 
looking at multicolor coincidence. What about homo-interactions? The solution lies in brightness analysis. These 
methods use the heterogeneity in the intensity distribution of fluorescence to measure the molecular brightness of 
different species in a sample. Since dimers will contain twice the number of fluorescent labels as monomers, their 
molecular brightness will be approximately double that of monomers. As a result, the relative brightness is sensitive 
a measure of oligomerization. The average molecular brightness ( {V\ ) is related to the variance ( <j 2 ) and the 
average intensity ( (J\ ) as follows: 

Here f+ and £{ are the fractional intensity and molecular brigthness, respectively, of species <j . 
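The moment relation can be illustrated with a toy simulation in the spirit of N&B-style analysis: photon counts are Poisson-distributed about \epsilon times a fluctuating molecule number, and subtracting the shot-noise contribution from the variance-to-mean ratio recovers the brightness. All parameter values here are illustrative assumptions:

```python
import numpy as np

def apparent_brightness(counts):
    """Brightness estimate from the first two moments: variance/mean - 1
    (the -1 removes the Poisson shot-noise contribution)."""
    k = np.asarray(counts, dtype=float)
    return k.var() / k.mean() - 1.0

rng = np.random.default_rng(1)

def simulate(eps, n_mean, frames=200_000):
    # Molecule number fluctuates (Poisson) frame to frame; detected photons
    # are Poisson about eps * n, so brighter species broaden the distribution.
    n = rng.poisson(n_mean, frames)
    return rng.poisson(eps * n)

mono = apparent_brightness(simulate(1.0, 10))   # monomer, brightness 1
dimer = apparent_brightness(simulate(2.0, 10))  # dimer carries twice the labels
```

In this toy model the dimer's recovered brightness comes out close to twice the monomer's, which is the signature brightness analysis exploits.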

Two- and three- photon FCS excitation 

Several advantages in both spatial resolution and minimizing photodamage/photobleaching in organic and/or 
biological samples are obtained by two-photon or three-photon excitation FCS.

FRET-FCS
Another FCS based approach to studying molecular interactions uses fluorescence resonance energy transfer (FRET) 
instead of fluorescence, and is called FRET-FCS. With FRET, there are two types of probes, as with FCCS;
however, there is only one channel and light is only detected when the two probes are very close — close enough to 
ensure an interaction. The FRET signal is weaker than with fluorescence, but has the advantage that there is only 
signal during a reaction (aside from autofluorescence). 

Image correlation spectroscopy (ICS) 

When the motion is slow (in biology, for example, diffusion in a membrane), getting adequate statistics from a 

single-point FCS experiment may take a prohibitively long time. More data can be obtained by performing the 

experiment in multiple spatial points in parallel, using a laser scanning confocal microscope. This approach has been 

called Image Correlation Spectroscopy (ICS). The measurements can then be averaged together.

Another variation of ICS performs a spatial autocorrelation on images, which gives information about the 
concentration of particles. The correlation is then averaged in time.

A natural extension of the temporal and spatial correlation versions is spatio-temporal ICS (STICS). In STICS
there is no explicit averaging in space or time (only the averaging inherent in correlation). In systems with 
non-isotropic motion (e.g. directed flow, asymmetric diffusion), STICS can extract the directional information. A 
variation that is closely related to STICS (by the Fourier transform) is k-space Image Correlation Spectroscopy (kICS). [42]

There are cross-correlation versions of ICS as well. 

Scanning FCS variations 

Some variations of FCS are only applicable to serial scanning laser microscopes. Image Correlation Spectroscopy and its variations were all implemented on scanning confocal or scanning two-photon microscopes, but they transfer to other microscopes, like the spinning disk confocal microscope. Raster ICS (RICS) and position-sensitive FCS (PSFCS) incorporate the time delay between parts of the image scan into the analysis. Also, low-dimensional scans (e.g. a circular ring), which are only possible on a scanning system, can access time scales between single-point and full-image measurements. The scanning path has also been made to adaptively follow particles.


Spinning disk FCS, and spatial mapping 

Any of the image correlation spectroscopy methods can also be performed on a spinning disk confocal microscope, 
which in practice can obtain faster imaging speeds compared to a laser scanning confocal microscope. This approach 

has recently been applied to diffusion in a spatially varying complex environment, producing a pixel-resolution map of the diffusion coefficient. The spatial mapping of diffusion with FCS has subsequently been extended to the TIRF system. Spatial mapping of dynamics using correlation techniques had been applied before, but only at sparse points or at coarse resolution.

Total internal reflection FCS 

Total internal reflection fluorescence (TIRF) is a microscopy approach that is only sensitive to a thin layer near the surface of a coverslip, which greatly minimizes background fluorescence. FCS has been extended to that type of microscope, and is called TIR-FCS. Because the fluorescence intensity in TIRF falls off exponentially with distance from the coverslip (instead of as a Gaussian, as with a confocal), the autocorrelation function is different.

Other fluorescent dynamical approaches 

There are two main non-correlation alternatives to FCS that are widely used to study the dynamics of fluorescent species:

Fluorescence recovery after photobleaching (FRAP) 

In FRAP, a region is briefly exposed to intense light, irrecoverably photobleaching fluorophores, and the 
fluorescence recovery due to diffusion of nearby (non-bleached) fluorophores is imaged. A primary advantage of 
FRAP over FCS is the ease of interpreting qualitative experiments common in cell biology. Differences between cell 
lines, or regions of a cell, or before and after application of drug, can often be characterized by simple inspection of 
movies. FCS experiments require a level of processing and are more sensitive to potentially confounding influences 
like: rotational diffusion, vibrations, photobleaching, dependence on illumination and fluorescence color, inadequate 
statistics, etc. It is much easier to change the measurement volume in FRAP, which allows greater control. In 
practice, the volumes are typically larger than in FCS. While FRAP experiments are typically more qualitative, some 
researchers are studying FRAP quantitatively and including binding dynamics. A disadvantage of FRAP in cell 
biology is the free radical perturbation of the cell caused by the photobleaching. It is also less versatile, as it cannot 
measure concentration or rotational diffusion, or co-localization. FRAP requires a significantly higher concentration 
of fluorophores than FCS. 

Particle tracking 

In particle tracking, the trajectories of a set of particles are measured, typically by applying particle tracking 
algorithms to movies. [52] Particle tracking has the advantage that all the dynamical information is maintained in the 
measurement, unlike FCS where correlation averages the dynamics to a single smooth curve. The advantage is 
apparent in systems showing complex diffusion, where directly computing the mean squared displacement allows 
straightforward comparison to normal or power law diffusion. To apply particle tracking, the particles have to be 
distinguishable and thus at lower concentration than required of FCS. Also, particle tracking is more sensitive to 
noise, which can sometimes affect the results unpredictably. 


See also 

• Confocal microscopy 

• Fluorescence cross-correlation spectroscopy (FCCS)


• Dynamic light scattering 

• Diffusion coefficient 
References

[I] Magde, D., Elson, E. L., Webb, W. W. Thermodynamic fluctuations in a reacting system: Measurement by fluorescence correlation 
spectroscopy, (1972) Phys Rev Lett, 29, 705-708. 

[2] Ehrenberg, M., Rigler, R. Rotational brownian motion and fluorescence intensity fluctuations, (1974) Chem Phys, 4, 390-401.

[3] Elson, E. L., Magde, D. Fluorescence correlation spectroscopy I. Conceptual basis and theory, (1974) Biopolymers, 13, 1—27. 

[4] Magde, D., Elson, E. L., Webb, W. W. Fluorescence correlation spectroscopy II. An experimental realization, (1974) Biopolymers, 13, 29—61. 

[5] Thompson N L 1991 Topics in Fluorescence Spectroscopy Techniques vol 1, ed J R Lakowicz (New York: Plenum) pp 337—78 

[6] Rigler, R., U. Mets, J. Widengren and P. Kask. Fluorescence correlation spectroscopy with high count rate and low background: analysis of

translational diffusion. European Biophysics Journal (1993) 22(3), 159. 
[7] Eigen, M., Rigler, M. Sorting single molecules: application to diagnostics and evolutionary biotechnology, (1994) Proc. Natl. Acad. Sci. USA, 

[8] Rigler, M. Fluorescence correlations, single molecule detection and large number screening. Applications in biotechnology, (1995) J. 

Biotechnol., 41, 177-186.
[9] O. Krichevsky, G. Bonnet, "Fluorescence correlation spectroscopy: the technique and its applications," Rep. Prog. Phys. 65, 251—297 (2002). 
[10] Medina, M. A., Schwille, P. Fluorescence correlation spectroscopy for the detection and study of single molecules in biology, 

(2002)BioEssays, 24, 758-764. 

[II] Mayboroda, O. A., van Remoortere, A., Tanke H. J., Hokke, C. H., Deelder, A. M., A new approach for fluorescence correlation 
spectroscopy (FCS) based immunoassays, (2003), J. Biotechnol, 107, 185—192. 

[12] Hess, S.T., and W.W. Webb. 2002. Focal volume optics and experimental artifacts in confocal fluorescence correlation spectroscopy. 

Biophys. J. 83:2300-2317. 
[13] Banks, D. S., and C. Fradin. 2005. Anomalous diffusion of proteins due to molecular crowding. Biophys. J. 89:2960—2971. 
[14] Sengupta, P., K. Garai, J. Balaji, N. Periasamy, and S. Maiti. 2003. Measuring Size Distribution in Highly Heterogeneous Systems with 

Fluorescence Correlation Spectroscopy. Biophys. J. 84(3): 1977— 1984. 
[15] Kohler, R.H., P. Schwille, W.W. Webb, and M.R. Hanson. 2000. Active protein transport through plastid tubules: velocity quantified by 

fluorescence correlation spectroscopy. J Cell Sci 113(22):3921— 3930 
[16] Magde, D., Elson, E. L., Webb, W. W. Fluorescence correlation spectroscopy II. An experimental realization,(1974) Biopolymers, 13,29—61. 
[17] Berland, K. M. Detection of specific DNA sequences using dual-color two-photon fluorescence correlation spectroscopy. (2004),/ 

Biotechnol ,108(2), 127-136. 
[18] Muller, C.B., Loman, A., Pacheco, V., Koberling, F., Willbold, D., Richtering, W., Enderlein, J. Precise measurement of diffusion by 

multi-color dual-focus fluorescence correlation spectroscopy (2008), EPL, 83, 46001. 
[19] Pristinski, D., Kozlovskaya, V., Sukhishvili, S. A. Fluorescence correlation spectroscopy studies of diffusion of a weak polyelectrolyte in 

aqueous solutions. (2005), J. Chem. Phys., 122, 014907. 
[20] Widengren, J., Schwille, P., Characterization of photoinduced isomerization and back-isomerization of the cyanine dye Cy5 by fluorescence 

correlation spectroscopy. (2000), / Phys. Chem. A, 104, 6416-6428. 
[21] Loman, A., Dertinger, T., Koberling, F., Enderlein, J. Comparison of optical saturation effects in conventional and dual-focus fluorescence 

correlation spectroscopy (2008), Chem. Phys. Lett., 459, 18—21. 
[22] Pristinski, D., Kozlovskaya, V., Sukhishvili, S. A. Fluorescence correlation spectroscopy studies of diffusion of a weak polyelectrolyte in 

aqueous solutions. (2005), J. Chem. Phys., 122, 014907. 
[23] Petrášek, Z.; Schwille, P., Precise Measurement of Diffusion Coefficients using Scanning Fluorescence Correlation Spectroscopy.

Biophys. J. 2008, 94 (4), 1437-1448. 
[24] Muller, C.B., Loman, A., Pacheco, V., Koberling, F., Willbold, D., Richtering, W., Enderlein, J. Precise measurement of diffusion by 

multi-color dual-focus fluorescence correlation spectroscopy (2008), EPL, 83, 46001. 
[25] Muller, C.B., Loman, A., Pacheco, V., Koberling, F., Willbold, D., Richtering, W., Enderlein, J. Precise measurement of diffusion by 

multi-color dual-focus fluorescence correlation spectroscopy (2008), EPL, 83, 46001. 
[26] Muller, C.B., Loman, A., Pacheco, V., Koberling, F., Willbold, D., Richtering, W., Enderlein, J. Precise measurement of diffusion by 

multi-color dual-focus fluorescence correlation spectroscopy (2008), EPL, 83, 46001. 
[27] Meseth, U., Wohland, T., Rigler, R., Vogel, H. Resolution of fluorescence correlation measurements. (1999) Biophys. J., 76, 1619—1631. 
[28] Digman, M. A., R. Dalai, A. F. Horwitz, and E. Gratton. Mapping the number of molecules and brightness in the laser scanning microscope. 

(2008) Biophys. J. 94, 2320-2332. 


[29] Chen, Y., J. D. Muller, P. T. C. So, and E. Gratton. The photon counting histogram in fluorescence fluctuation spectroscopy. (1999) 

Biophys. J. 77, 553-567. 
[30] Kask, P., K. Palo, D. Ullmann, and K. Gall. Fluorescence-intensity distribution analysis and its application in biomolecular detection 

technology. (1999) Proc. Natl. Acad. Sci. U. S. A. 96, 13756-13761. 
[31] Muller, J. D. Cumulant analysis in fluorescence fluctuation spectroscopy. (2004) Biophys. J. 86, 3981—3992. 
[32] Qian, H., Elson, E.L. On the analysis of high order moments of fluorescence fluctuations. (1990) Biophys. J., 57, 375—380. 
[33] Diaspro, A., and Robello, M. (1999). Multi-photon Excitation Microscopy to Study Biosystems. European Microscopy and Analysis., 5:5—7. 
[34] Bagatolli, L.A., and Gratton, E. (2000). Two-photon fluorescence microscopy of coexisting lipid domains in giant unilamellar vesicles of 

binary phospholipid mixtures. Biophys J., 78:290—305. 
[35] Schwille, P., Haupts, U., Maiti, S., and Webb. W.(1999). Molecular dynamics in living cells observed by fluorescence correlation 

spectroscopy with one- and two- photon excitation. Biophysical Journal, 77(10):2251— 2265. 
[36] Near Infrared Microspectroscopy, Fluorescence Microspectroscopy, Infrared Chemical Imaging and High Resolution Nuclear Magnetic 

Resonance Analysis of Soybean Seeds, Somatic Embryos and Single Cells., Baianu, I.C. et al. 2004., In Oil Extraction and Analysis. , D. 

Luthria, Editor pp.241-273, AOCS Press., Champaign, IL. 
[37] Single Cancer Cell Detection by Near Infrared Microspectroscopy, Infrared Chemical Imaging and Fluorescence Microspectroscopy. 2004.1. 

C. Baianu, D. Costescu, N. E. Hofmann and S. S. Korban, q-bio/0407006 (July 2004) ( 
[38] K. Remaut, B. Lucas, K. Braeckmans, N.N. Sanders, S.C. De Smedt and J. Demeester, FRET-FCS as a tool to evaluate the stability of 

oligonucleotide drugs after intracellular delivery, J Control Rel 103 (2005) (1), pp. 259—271. 
[39] Wiseman, P. W., J. A. Squier, M. H. Ellisman, and K. R. Wilson. 2000. Two-photon video rate image correlation spectroscopy (ICS) and 

image cross-correlation spectroscopy (ICCS). J. Microsc. 200:14—25. 
[40] Petersen, N. O., P. L. Ho'ddelius, P. W. Wiseman, O. Seger, and K. E. Magnusson. 1993. Quantitation of membrane receptor distributions 

by image correlation spectroscopy: concept and application. Biophys. J. 65:1135—1146. 
[41] Hebert, B., S. Constantino, and P. W. Wiseman. 2005. Spatio-temporal image correlation spectroscopy (STICS): theory, verification and 

application to protein velocity mapping in living CHO cells. Biophys. J. 88:3601—3614. 
[42] Kolin, D.L., D. Ronis, and P.W. Wiseman. 2006. &-Space Image Correlation Spectroscopy: A Method for Accurate Transport Measurements 

Independent of Fluorophore Photophysics. Biophys. J. 91(8):3061— 3075. 
[43] Digman, M.A., P. Sengupta, P.W. Wiseman, CM. Brown, A.R. Horwitz, and E. Gratton. 2005. Fluctuation Correlation Spectroscopy with a 

Laser-Scanning Microscope: Exploiting the Hidden Time Structure. Biophys. J. 88(5):L33— 36. 
[44] Skinner, J. P., Y. Chen, and J.D. Mueller. 2005. Position-Sensitive Scanning Fluorescence Correlation Spectroscopy. Biophys. 

J.:biophysj. 105.060749. 
[45] Ruan, Q., M.A. Cheng, M. Levi, E. Gratton, and W.W. Mantulin. 2004. Spatial-temporal studies of membrane dynamics: scanning 

fluorescence correlation spectroscopy (SFCS). Biophys. J. 87:1260—1267. 
[46] A. Berglund and H. Mabuchi, "Tracking-FCS: Fluorescence correlation spectroscopy of individual particles," Opt. Express 13, 8069—8082 

[47] Sisan, D.R., R. Arevalo, C. Graves, R. McAllister, and J.S. Urbach. 2006. Spatially resolved fluorescence correlation spectroscopy using a 

spinning disk confocal microscope. Biophysical Journal 91(11):4241— 4252. 
[48] Kannan, B., L. Guo, T. Sudhaharan, S. Ahmed, I. Maruyama, and T. Wohland. 2007. Spatially resolved total internal reflection fluorescence 

correlation microscopy using an electron multiplying charge-coupled device camera. Analytical Chemistry 79(12):4463^470 
[49] Wachsmuth, M., W. Waldeck, and J. Langowski. 2000. Anomalous diffusion of fluorescent probes inside living cell nuclei investigated by 

spatially-resolved fluorescence correlation spectroscopy. J. Mol. Biol. 298(4):677— 689. 
[50] Lieto, A.M., and N.L. Thompson. 2004. Total Internal Reflection with Fluorescence Correlation Spectroscopy: Nonfluorescent Competitors. 

Biophys. J. 87(2): 1268-1278. 
[51] Sprague, B.L., and J.G. McNally. 2005. FRAP analysis of binding: proper and fitting. Trends in Cell Biology 15(2):84-91. 


Further reading 

• Rigler R. and Widengren J. (1990). Ultrasensitive detection of single molecules by fluorescence correlation 
spectroscopy, BioScience (Ed. Klinge & Owman) p. 180 

• Oehlenschlager F., Schwille P. and Eigen M. (1996). Detection of HIV-1 RNA by nucleic acid sequence-based 
amplification combined with fluorescence correlation spectroscopy, Proc. Natl. Acad. Sci. USA 93:1281. 

External links 

• Single-molecule spectroscopic methods

• FCS Classroom

• Stowers Institute FCS Tutorial

• Cell Migration Consortium FCS Tutorial

Fluorescence cross-correlation spectroscopy 

Fluorescence cross-correlation spectroscopy (FCCS) was introduced by Eigen and Rigler in 1994 and 
experimentally realized by Schwille in 1997. It extends the fluorescence correlation spectroscopy (FCS) procedure 
by introducing high sensitivity for distinguishing fluorescent particles that have similar diffusion coefficients. 
FCCS uses two species that are independently labelled with two spectrally separated fluorescent probes. These 
fluorescent probes are excited and detected by two different laser light sources and detectors, commonly referred to 
as the green and red channels, respectively. Both laser beams are focused into the sample and tuned so that they 
overlap to form a superimposed confocal observation volume.

The normalized cross-correlation function is defined for two fluorescent species, G and R, detected in independent 
green (G) and red (R) channels as follows:

G_{GR}(\tau) = 1 + \frac{\langle \delta I_G(t)\,\delta I_R(t+\tau)\rangle}{\langle I_G(t)\rangle \langle I_R(t)\rangle}

where the differential fluorescent signals \delta I_G at a specific time t and \delta I_R at a delay time \tau later are correlated with 
each other.

Cross-correlation curves are modeled according to a slightly more complicated mathematical function than applied 
in FCS. First of all, the effective superimposed observation volume, in which the G and R channels form a single 
observation volume V_{eff,RG} in the solution, is:

V_{eff,RG} = \pi^{3/2} \left(\omega_{x,G}^2 + \omega_{x,R}^2\right) \left(\omega_{z,G}^2 + \omega_{z,R}^2\right)^{1/2} / \, 2^{3/2}

where \omega_{x,G} and \omega_{x,R} are the radial parameters, and \omega_{z,G} and \omega_{z,R} are the axial parameters, for the G and R 
channels respectively.

The diffusion time \tau_{D,GR} for a doubly (G and R) labelled fluorescent species is therefore described as follows:

\tau_{D,GR} = \frac{\omega_{x,G}^2 + \omega_{x,R}^2}{8 D_{GR}}

where D_{GR} is the diffusion coefficient of the doubly labelled fluorescent particle.

The cross-correlation curve generated from diffusing doubly labelled fluorescent particles can be modelled in 
separate channels as follows:

G_G(\tau) = 1 + \frac{\langle C_G \rangle \mathrm{Diff}_G(\tau) + \langle C_{GR} \rangle \mathrm{Diff}_{GR}(\tau)}{V_{eff,GR} \left(\langle C_G \rangle + \langle C_{GR} \rangle\right)^2}

G_R(\tau) = 1 + \frac{\langle C_R \rangle \mathrm{Diff}_R(\tau) + \langle C_{GR} \rangle \mathrm{Diff}_{GR}(\tau)}{V_{eff,RG} \left(\langle C_R \rangle + \langle C_{GR} \rangle\right)^2}

with \mathrm{Diff}_k(\tau) = \frac{1}{\left(1 + \frac{\tau}{\tau_{D,k}}\right) \left(1 + a^{-2} \frac{\tau}{\tau_{D,k}}\right)^{1/2}}

where a = \omega_z / \omega_x is the structure parameter of the observation volume.

In the ideal case, the cross-correlation function is proportional to the concentration of the doubly labelled fluorescent 
complex:

G_{GR}(\tau) = 1 + \frac{\langle C_{GR} \rangle \mathrm{Diff}_{GR}(\tau)}{V_{eff,GR} \left(\langle C_G \rangle + \langle C_{GR} \rangle\right) \left(\langle C_R \rangle + \langle C_{GR} \rangle\right)}

Contrary to FCS, the intercept of the cross-correlation curve does not yield information about the doubly labelled 
fluorescent particles in solution. 
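The cross-correlation estimator defined above can be checked numerically. The following is a minimal sketch, not instrument code, that estimates G_GR(τ) directly from its definition using synthetic, evenly sampled intensity traces; the trace statistics and function name are illustrative assumptions.

```python
import numpy as np

def cross_correlation(i_g, i_r, max_lag):
    """Estimate the normalized cross-correlation
    G_GR(tau) = 1 + <dI_G(t) dI_R(t+tau)> / (<I_G><I_R>)
    from two equal-length intensity traces, for lags 0..max_lag-1."""
    i_g = np.asarray(i_g, dtype=float)
    i_r = np.asarray(i_r, dtype=float)
    d_g = i_g - i_g.mean()              # fluctuations dI_G(t)
    d_r = i_r - i_r.mean()              # fluctuations dI_R(t)
    norm = i_g.mean() * i_r.mean()
    g = np.empty(max_lag)
    for lag in range(max_lag):
        # average the product of fluctuations at delay 'lag'
        g[lag] = 1.0 + np.mean(d_g[: len(d_g) - lag] * d_r[lag:]) / norm
    return g

# Two perfectly co-diffusing (doubly labelled) channels give a strong
# zero-lag cross-correlation; independent channels give G_GR ~ 1.
rng = np.random.default_rng(0)
common = rng.poisson(5.0, 10000).astype(float)
g_corr = cross_correlation(common, common + rng.normal(0.0, 0.1, 10000), 5)
```

For a shared Poisson signal of mean 5, the zero-lag amplitude is about 1 + 5/25 = 1.2, while mixing in an unrelated trace drives it back toward 1, which is the qualitative signature FCCS exploits.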

See also 

• Fluorescence correlation spectroscopy 

• Dynamic light scattering 

• Fluorescence spectroscopy 

• Diffusion coefficient 

External links 

• FCS Classroom




Förster resonance energy transfer

Förster resonance energy transfer (abbreviated FRET), also known as fluorescence resonance energy transfer, 
resonance energy transfer (RET) or electronic energy transfer (EET), is a mechanism describing energy transfer 
between two chromophores. A donor chromophore, initially in its electronic excited state, may transfer energy to an 
acceptor chromophore (in proximity, typically less than 10 nm) through nonradiative dipole–dipole coupling. This 
mechanism is termed "Förster resonance energy transfer" and is named after the German scientist Theodor Förster. 
When both chromophores are fluorescent, the term "fluorescence resonance energy transfer" is often used instead, 
although the energy is not actually transferred by fluorescence.


In order to avoid an erroneous interpretation of the 

phenomenon that (even when occurring between two fluorescent 

chromophores) is always a nonradiative transfer of energy, the 

name "Forster resonance energy transfer" is preferred to 

"fluorescence resonance energy transfer" — although the latter 

enjoys common usage in scientific literature, despite being 

incorrect. FRET is analogous to near field communication, in that the radius of interaction is much smaller than the 

Fluorescently-labeled guanosine 5 '-triphosphate 
hydrolase ARF reveals the protein's localization in the 
Golgi apparatus of a living macrophage. FRET studies 

revealed ARF activation in the Golgi and in the 
formation of phagosomes. 

Forster resonance energy transfer 183 

wavelength of light emitted. In the near field region, the excited chromophore emits a virtual photon that is instantly 
absorbed by a receiving chromophore. These virtual photons are undetectable, since their existence violates the 
conservation of energy and momentum, and hence FRET is known as a radiationless mechanism. From quantum 
electrodynamical calculations, it is determined that radiationless (FRET) and radiative energy transfer are the short- 
and long-range asymptotes of a single unified mechanism. 

Theoretical basis 

The FRET efficiency (E) is the quantum yield of the energy transfer transition, i.e. the fraction of energy transfer 
events occurring per donor excitation event:

E = \frac{k_{ET}}{k_f + k_{ET} + \sum_i k_i}

where k_{ET} is the rate of energy transfer, k_f the radiative decay rate, and the k_i are the rate constants of any other 
de-excitation pathways.

The FRET efficiency depends on many parameters that can be grouped as follows: 

• The distance between the donor and the acceptor 

• The spectral overlap of the donor emission spectrum and the acceptor absorption spectrum. 

• The relative orientation of the donor emission dipole moment and the acceptor absorption dipole moment. 

E depends on the donor-to-acceptor separation distance r with an inverse 6th power law due to the dipole–dipole 
coupling mechanism:

E = \frac{1}{1 + (r/R_0)^6}

with R_0 being the Förster distance of this pair of donor and acceptor, i.e. the distance at which the energy transfer 
efficiency is 50%. The Förster distance depends on the overlap integral of the donor emission spectrum with the 
acceptor absorption spectrum and their mutual molecular orientation, as expressed by the following equation:

R_0^6 = \frac{9 \, Q_0 \, (\ln 10) \, \kappa^2 \, J}{128 \, \pi^5 \, n^4 \, N_A}

where Q_0 is the fluorescence quantum yield of the donor in the absence of the acceptor, \kappa^2 is the dipole orientation 
factor, n is the refractive index of the medium, N_A is Avogadro's number, and J is the spectral overlap integral 
calculated as

J = \int f_D(\lambda) \, \epsilon_A(\lambda) \, \lambda^4 \, d\lambda

where f_D is the normalized donor emission spectrum, and \epsilon_A is the acceptor molar extinction coefficient. \kappa^2 = 2/3 is 
often assumed. This value is obtained when both dyes are freely rotating and can be considered isotropically 
oriented during the excited-state lifetime. If either dye is fixed or not free to rotate, then \kappa^2 = 2/3 is not a valid 
assumption. In most cases, however, even modest reorientation of the dyes results in enough orientational averaging 
that \kappa^2 = 2/3 does not result in a large error in the estimated energy transfer distance, due to the sixth-power 
dependence of R_0 on \kappa^2. Even when \kappa^2 is quite different from 2/3, the error can be associated with a shift in R_0, and 
thus determinations of changes in relative distance for a particular system are still valid. Fluorescent proteins do not 
reorient on a timescale that is faster than their fluorescence lifetime; in this case, 0 \le \kappa^2 \le 4.
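For practical work the Förster distance equation above is usually evaluated in the equivalent unit-adjusted form R_0 = 0.211 (\kappa^2 n^{-4} Q_0 J)^{1/6} in ångströms, with J in M⁻¹ cm⁻¹ nm⁴ (a standard form in the spectroscopy literature). The sketch below uses that form; all numerical values and the function name are illustrative assumptions, not measured quantities.

```python
def forster_radius_angstrom(kappa2, n, q_d, j_overlap):
    """Forster distance R0 (in Angstrom) from the practical form
    R0 = 0.211 * (kappa^2 * n^-4 * Q_D * J)^(1/6),
    with the overlap integral J expressed in M^-1 cm^-1 nm^4."""
    return 0.211 * (kappa2 * n ** -4 * q_d * j_overlap) ** (1.0 / 6.0)

# Illustrative inputs: freely rotating dyes (kappa^2 = 2/3) in aqueous
# buffer (n = 1.33), donor quantum yield 0.5, J = 1e15 M^-1 cm^-1 nm^4.
r0 = forster_radius_angstrom(2.0 / 3.0, 1.33, 0.5, 1.0e15)
```

With these assumed inputs the result is a few tens of ångströms, i.e. in the 1–10 nm range over which FRET is useful; note how weakly R_0 responds to its inputs because of the 1/6 power.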
The FRET efficiency relates to the quantum yield and the fluorescence lifetime of the donor molecule as follows:

E = 1 - \tau'_D / \tau_D

where \tau'_D and \tau_D are the donor fluorescence lifetimes in the presence and absence of an acceptor, respectively, or

E = 1 - F'_D / F_D

where F'_D and F_D are the donor fluorescence intensities with and without an acceptor, respectively.
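The distance-based and lifetime-based expressions for E above can be cross-checked numerically. In the sketch below the function names, distances and lifetimes are hypothetical values, chosen so that both routes give the same efficiency.

```python
def efficiency_from_distance(r, r0):
    # E = 1 / (1 + (r/R0)^6)
    return 1.0 / (1.0 + (r / r0) ** 6)

def efficiency_from_lifetimes(tau_da, tau_d):
    # E = 1 - tau'_D / tau_D (donor lifetime with / without acceptor)
    return 1.0 - tau_da / tau_d

# At r = R0 the transfer efficiency is 50% by definition.
e_half = efficiency_from_distance(5.0, 5.0)

# Hypothetical lifetimes: 1.2 ns with acceptor vs. 2.4 ns without -> E = 0.5,
# consistent with a donor-acceptor separation equal to R0.
e_tau = efficiency_from_lifetimes(1.2, 2.4)
```

The sixth-power law makes E a sharp switch around R_0: at twice the Förster distance the efficiency has already fallen below 2%.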


[Figure: "no FRET signal" vs. "FRET signal" (labels translated from Italian). The donor is excited at 436 nm; with 
the acceptor far away it emits at 480 nm, while at a donor–acceptor separation of about 5 nm FRET produces 
acceptor emission instead.]

In fluorescence microscopy, fluorescence confocal laser scanning microscopy, as well as in molecular biology, 
FRET is a useful tool to quantify molecular dynamics in biophysics and biochemistry, such as protein–protein 
interactions, protein–DNA interactions, and protein conformational changes. For monitoring the complex formation 
between two molecules, one of them is labeled with a donor and the other with an acceptor, and these 
fluorophore-labeled molecules are mixed. When they are dissociated, the donor emission is detected upon donor 
excitation. When the donor and acceptor are in proximity (1–10 nm) due to the interaction of the two molecules, 
however, the acceptor emission is predominantly observed, because of the intermolecular FRET from the donor to 
the acceptor. For monitoring protein conformational changes, the target protein is labeled with a donor and an 
acceptor at two loci. When a twist or bend of the protein changes the distance or relative orientation of the donor 
and acceptor, a FRET change is observed. If a molecular interaction or a protein conformational change is dependent 
on ligand binding, this FRET technique is applicable to fluorescent indicators for detecting the ligand.

FRET studies are scalable: the extent of energy transfer can be quantified from the milliliter scale of cuvette-based 
experiments to the femtoliter scale of microscopy-based experiments. This quantification can be based directly on 
detecting two emission channels under two different excitation conditions (primarily donor and primarily acceptor), 
the sensitized emission method. For robustness reasons, however, FRET quantification is most often based on 
measuring changes in fluorescence intensity or fluorescence lifetime upon changing the experimental conditions. 
For example, a microscope image of donor emission is taken with the acceptor present; the acceptor is then 
bleached, such that it is incapable of accepting energy transfer, and another donor emission image is acquired. A 
pixel-based quantification using the second efficiency equation in the theory section above is then possible. An 
alternative way of temporarily deactivating the acceptor is based on its fluorescence saturation. Exploiting the 
polarisation characteristics of light, a FRET quantification is also possible with only a single camera exposure.
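The acceptor-photobleaching protocol just described reduces to a pixelwise application of E = 1 − F'_D/F_D, where the pre-bleach donor image supplies F'_D and the post-bleach image F_D. Below is a minimal sketch with synthetic images; the function name, array sizes and intensities are hypothetical.

```python
import numpy as np

def fret_efficiency_map(donor_pre, donor_post, eps=1e-9):
    """Pixel-based FRET efficiency from donor-emission images taken
    before (acceptor intact) and after (acceptor bleached) photobleaching:
    E = 1 - F_pre / F_post, clipped to the physical range [0, 1]."""
    e = 1.0 - donor_pre / np.maximum(donor_post, eps)
    return np.clip(e, 0.0, 1.0)

# Synthetic example: after bleaching the acceptor, donor emission doubles
# in the interacting region (top rows) -> E = 0.5 there, 0 elsewhere.
pre = np.full((4, 4), 100.0)
post = pre.copy()
post[:2, :] = 200.0
e_map = fret_efficiency_map(pre, post)
```

In practice the images would first be background-subtracted and registered; the clipping only guards against noise pushing pixels outside the physical range.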

[Figure: example of FRET between CFP and YFP (wavelength vs. absorption): a fusion protein containing CFP and 
YFP is excited at 440 nm. The fluorescent emission peak of CFP (480 nm) overlaps the excitation peak of YFP. 
Because the two proteins are adjacent to each other, the energy transfer is significant: a large proportion of the 
energy from CFP is transferred to YFP and creates a much larger YFP emission at 535 nm.]

CFP- YFP pairs 

The most popular FRET pair for biological use is the cyan fluorescent protein (CFP) – yellow fluorescent protein 
(YFP) pair. Both are color variants of green fluorescent protein (GFP). While labeling with organic fluorescent dyes 
requires troublesome processes of purification, chemical modification, and intracellular injection of a host protein, 
GFP variants can be easily attached to a host protein by genetic engineering. By virtue of GFP variants, the use of 
FRET techniques for biological research is becoming increasingly popular.



A limitation of FRET is the requirement for external illumination to initiate the fluorescence transfer, which can lead 
to background noise in the results from direct excitation of the acceptor or to photobleaching. To avoid this 
drawback, Bioluminescence Resonance Energy Transfer (or BRET) has been developed. This technique uses a 
bioluminescent luciferase (typically the luciferase from Renilla reniformis) rather than CFP to produce an initial 
photon emission compatible with YFP. 

FRET and BRET are also the common tools in the study of biochemical reaction kinetics and molecular motors. 

Photobleaching FRET 

FRET efficiencies can also be inferred from the photobleaching rates of the donor in the presence and absence of an 
acceptor. This method can be performed on most fluorescence microscopes; one simply shines the excitation light 
(of a frequency that will excite the donor but not the acceptor significantly) on specimens with and without the 
acceptor fluorophore and monitors the donor fluorescence (typically separated from acceptor fluorescence using a 
bandpass filter) over time. The timescale is that of photobleaching, which is seconds to minutes, with fluorescence in 
each curve being given by 

(\text{background}) + (\text{constant}) \cdot e^{-\text{time}/\tau_{pb}}

where \tau_{pb} is the photobleaching decay time constant, which depends on whether the acceptor is present or not. Since 
photobleaching consists in the permanent inactivation of excited fluorophores, resonance energy transfer from an 
excited donor to an acceptor fluorophore prevents the photobleaching of that donor fluorophore, and thus high FRET 
efficiency leads to a longer photobleaching decay time constant:

E = 1 - \tau_{pb} / \tau'_{pb}

where \tau'_{pb} and \tau_{pb} are the photobleaching decay time constants of the donor in the presence and in the absence of 
the acceptor, respectively. (Notice that the fraction is the reciprocal of that used for lifetime measurements.)
This technique was introduced by Jovin in 1989. Its use of an entire curve of points to extract the time constants 
can give it accuracy advantages over the other methods. Also, the fact that time measurements are over seconds 
rather than nanoseconds makes it easier than fluorescence lifetime measurements, and because photobleaching decay 
rates do not generally depend on donor concentration (unless acceptor saturation is an issue), the careful control of 
concentrations needed for intensity measurements is not needed. It is, however, important to keep the illumination 
the same for the with- and without-acceptor measurements, as photobleaching increases markedly with more intense 
incident light. 
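Extracting the decay constants from the photobleaching curves above amounts to fitting a single exponential. The sketch below does this by linear least squares on the logarithm, assuming the background term has already been subtracted; the data are synthetic and noise-free, and all names and values are illustrative.

```python
import numpy as np

def photobleach_time_constant(t, intensity):
    """Fit intensity = constant * exp(-t / tau_pb) (background already
    subtracted) by linear least squares on log(intensity); return tau_pb."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

t = np.linspace(0.0, 60.0, 200)                  # seconds
decay_no_acceptor = 500.0 * np.exp(-t / 10.0)    # donor alone bleaches fast
decay_with_acceptor = 500.0 * np.exp(-t / 25.0)  # FRET slows photobleaching

tau_no = photobleach_time_constant(t, decay_no_acceptor)
tau_with = photobleach_time_constant(t, decay_with_acceptor)
# E = 1 - tau_pb / tau'_pb (absence over presence), per the equation above.
efficiency = 1.0 - tau_no / tau_with
```

With noisy real data a weighted or nonlinear fit (including the background as a free parameter) would be preferable, but the log-linear fit shows the structure of the calculation.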

Other methods 

A different, but related, mechanism is Dexter Electron Transfer. 

An alternative method of detecting protein–protein proximity is bimolecular fluorescence complementation (BiFC), 
in which two halves of a YFP are fused to a protein. When these two halves meet, they form a fluorophore after 
about 60 s to 1 hr. [8]


FRET has been applied in an experimental method for the detection of phosgene. In it, phosgene (or rather 
triphosgene as a safe substitute) serves as a linker between an acceptor and a donor coumarin, forming urea groups. 
The presence of phosgene is detected at 5x10 M with a typical FRET emission at 464 nm.

Forster resonance energy transfer 






[Figure: coumarin derivative I (donor), triphosgene, and coumarin derivative II (acceptor).]

FRET is also used to study lipid rafts in cell membranes. 

See also 

• Resonant energy transfer is used for remotely powering equipment such as smart cards 

• Fluorescence principles / energy transfer (FRET): light emitted upon excitation by illumination at a shorter 
wavelength
• Bioluminescence principles / energy transfer (BRET): light emitted by a chemical enzymatic reaction 
(Luminol/Peroxidase, Luciferin/Luciferase, Coelenterazine/Aequorin) 

• FCS 


[1] Inman, M. 2006. Inconspicuous consumption: uncovering the molecular pathways behind phagocytosis. PLoS Biology 4(6):e190.
[2] Förster, T. 1948. Zwischenmolekulare Energiewanderung und Fluoreszenz. Ann. Physik 437:55. doi:10.1002/andp.19484370105
[3] Lakowicz, J. R. 1999. Principles of Fluorescence Spectroscopy, 2nd edition. Plenum Publishing Corporation.
[4] FRET microscopy tutorial from Olympus.
[5] Andrews, D. L. 1989. A unified theory of radiative and radiationless molecular energy transfer. Chem. Phys. 135:195–201. doi:10.1016/0301-0104(89)87019-3
[6] Andrews, D. L., and D. S. Bradshaw. 2004. Virtual photons, dipole fields and energy transfer: a quantum electrodynamical approach. Eur. J. Phys. 25:845–858. doi:10.1088/0143-0807/25/6/017
[7] Jovin, T. M., and D. J. Arndt-Jovin. 1989. FRET microscopy: digital imaging of fluorescence resonance energy transfer. Application in cell biology. In Cell Structure and Function by Microspectrofluorometry, E. Kohen, J. G. Hirschberg, and J. S. Ploem, editors. Academic Press, London.
[8] Hu, C. D., Y. Chinenov, and T. K. Kerppola. April 2002. Visualization of interactions among bZIP and Rel family proteins in living cells using bimolecular fluorescence complementation. Mol. Cell 9(4):789–798. PMID 11983170.
[9] Zhang, H., and D. M. Rudkevich. March 2007. A FRET approach to phosgene detection. Chem. Commun. (Camb.) (12):1238–1239. doi:10.1039/b614725a. PMID 17356768.



[10] Silvius, J. R., and I. R. Nabi. 2006. Fluorescence-quenching and resonance energy transfer studies of lipid microdomains in model and biological membranes (review). Molec. Membr. Biol. 23:5–16. doi:10.1080/09687860500473002

Further reading 

• Recent advances in FRET: distance determination in protein–DNA complexes. Current Opinion in Structural 
Biology 2001, 11(2):201–207. doi:10.1016/S0959-440X(00)00190-1

External links 

• Browser-based calculator to find the critical distance and FRET efficiency with known spectral overlap

• FRET description

• Fluorescence Resonance Energy Transfer (FRET) Microscopy

• Lambert Instruments

X-ray microscope 

An X-ray microscope uses electromagnetic radiation in the soft X-ray band to produce images of very small objects. 

Unlike visible light, X-rays do not reflect or refract easily, and they are invisible to the human eye. Therefore, the 
basic process of an X-ray microscope is to expose film or use a charge-coupled device (CCD) detector to detect 
X-rays that pass through the specimen. It is a contrast imaging technology exploiting the difference in absorption of 
soft X-rays in the water window region (wavelengths 2.3–4.4 nm, photon energies 0.28–0.53 keV) by the carbon 
atom (the main element composing the living cell) and the oxygen atom (the main element of water).

Early X-ray microscopes by Paul Kirkpatrick and Albert Baez used grazing-incidence reflective optics to focus the 
X-rays, which grazed X-rays off parabolic curved mirrors at a very high angle of incidence. An alternative method of 
focusing X-rays is to use a tiny fresnel zone plate of concentric gold or nickel rings on a silicon dioxide substrate. Sir 
Lawrence Bragg produced some of the first usable X-ray images with his apparatus in the late 1940s. 

In the 1950s, Newberry produced a shadow X-ray microscope, which placed the specimen between the source 
and a target plate; this became the basis for the first commercial X-ray microscopes from the General Electric 
Company.

The Advanced Light Source (ALS) [1] in Berkeley, CA, is home to XM-1, a full-field soft X-ray microscope 
operated by the Center for X-ray Optics [2] and dedicated to various applications in modern nanoscience, such as 
nanomagnetic materials, environmental and materials sciences, and biology. XM-1 uses an X-ray lens to focus 
X-rays on a CCD, in a manner similar to an optical microscope. XM-1 still holds the world record in spatial 
resolution with Fresnel zone plates, down to 15 nm, and is able to combine high spatial resolution with a sub-100 ps 
time resolution to study, e.g., ultrafast spin dynamics.

[Figure: indirect drive laser inertial confinement fusion uses a "hohlraum", irradiated with laser beam cones from 
either side on its inner surface to bathe a fusion microcapsule inside with smooth, high-intensity X-rays. The 
highest-energy X-rays, which penetrate the hohlraum, can be visualized using an X-ray microscope.]



The ALS is also home to the world's first soft X-ray microscope designed for biological and biomedical research. 
This new instrument, XM-2, was designed and built by scientists from the National Center for X-ray Tomography 
(http://ncxt.lbl.gov). XM-2 is capable of producing three-dimensional tomograms of cells.

Sources of soft X-rays suitable for microscopy, such as synchrotron radiation sources, have fairly low brightness at 
the required wavelengths, so an alternative method of image formation is scanning transmission soft X-ray 
microscopy. Here the X-rays are focused to a point and the sample is mechanically scanned through the produced 
focal spot. At each point the transmitted X-rays are recorded with a detector such as a proportional counter or an 
avalanche photodiode. This type of scanning transmission X-ray microscope (STXM) was first developed by 
researchers at Stony Brook University and was employed at the National Synchrotron Light Source at Brookhaven 
National Laboratory.
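The scanning transmission scheme just described, with the focus fixed and the sample stepped through it while the transmitted signal is recorded at each position, can be sketched as a simple raster loop over a transmission map. The specimen geometry, count level and function name below are synthetic assumptions for illustration only.

```python
import numpy as np

def stxm_scan(transmission_map, incident_counts=1000.0):
    """Simulate a scanning transmission X-ray image: the sample is stepped
    through the focal spot row by row, and the transmitted photon count is
    recorded at each raster position (counting noise omitted for clarity)."""
    ny, nx = transmission_map.shape
    image = np.zeros((ny, nx))
    for iy in range(ny):            # raster scan: row by row
        for ix in range(nx):
            image[iy, ix] = incident_counts * transmission_map[iy, ix]
    return image

# Synthetic specimen: an absorbing disc (30% transmission) on a clear field,
# mimicking carbon-rich material in the water window.
yy, xx = np.mgrid[0:32, 0:32]
sample = np.where((yy - 16) ** 2 + (xx - 16) ** 2 < 64, 0.3, 1.0)
image = stxm_scan(sample)
```

A real STXM would add Poisson counting noise and dwell-time normalization per pixel, but the image is assembled point by point in exactly this serial fashion.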

The resolution of X-ray microscopy lies between that of the optical microscope and the electron microscope. It has 
an advantage over conventional electron microscopy in that it can view biological samples in their natural state. 
Electron microscopy is widely used to obtain images with nanometer-level resolution, but the relatively thick living 
cell cannot be observed, as the sample has to be chemically fixed, dehydrated, embedded in resin, and then sliced 
ultra-thin. It should be mentioned, however, that cryo-electron microscopy allows the observation of biological 
specimens in their hydrated natural state, albeit embedded in water ice. To date, resolutions of 30 nanometres are 
possible using the Fresnel zone plate lens, which forms the image using the soft X-rays emitted from a synchrotron. 
Recently, more researchers have begun to use the soft X-rays emitted from laser-produced plasmas rather than 
synchrotron radiation.

Additionally, X-rays cause fluorescence in most materials, and these emissions can be analyzed to determine the 
chemical elements of an imaged object. Another use is to generate diffraction patterns, a process used in X-ray 
crystallography. By analyzing the internal reflections of a diffraction pattern (usually with a computer program), the 
three-dimensional structure of a crystal can be determined down to the placement of individual atoms within its 
molecules. X-ray microscopes are sometimes used for these analyses because the samples are too small to be 
analyzed in any other way. 

See also 

• Synchrotron X-ray tomographic microscopy 

External links 

• Application of X-ray microscopy in analysis of living hydrated cells

• Hard X-ray microbeam experiments with a sputtered-sliced Fresnel zone plate and its applications

• Scientific applications of soft X-ray microscopy


[Figure: a square beryllium foil mounted in a steel case, to be used as a window between a vacuum chamber and an 
X-ray microscope. Beryllium, due to its low atomic number, is highly transparent to X-rays.]








Electron Microscope 

An electron microscope is a type of microscope that produces an 
electronically magnified image of a specimen for detailed observation. The 
electron microscope (EM) uses a particle beam of electrons to illuminate the 
specimen and create a magnified image of it. The microscope has a greater 
resolving power than a light-powered optical microscope because it uses 
electrons, which have wavelengths about 100,000 times shorter than visible 
light (photons), and it can achieve magnifications of up to 2,000,000x, 
whereas light microscopes are limited to 2000x magnification.

The electron microscope uses electrostatic and electromagnetic 
"lenses" to control the electron beam and focus it to form an image. 
These lenses are analogous to, but different from, the glass lenses of 
an optical microscope that form a magnified image by focusing light 
on or through the specimen.

Electron microscopes are used to observe a wide range of biological 
and inorganic specimens including microorganisms, cells, large 
molecules, biopsy samples, metals, and crystals. Industrially, the 
electron microscope is primarily used for quality control and failure 
analysis in semiconductor device fabrication. 
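The claim above that electrons have wavelengths about 100,000 times shorter than visible light follows from the de Broglie relation with a relativistic correction. The sketch below evaluates it for a typical accelerating voltage; the function name is an assumption, the constants are standard CODATA values.

```python
import math

def electron_wavelength_pm(voltage_kv):
    """Relativistic de Broglie wavelength (in picometres) of an electron
    accelerated through voltage_kv kilovolts:
    lambda = h / sqrt(2 m0 eV (1 + eV / (2 m0 c^2)))."""
    h = 6.62607015e-34         # Planck constant, J s
    m0 = 9.1093837015e-31      # electron rest mass, kg
    e = 1.602176634e-19        # elementary charge, C
    c = 2.99792458e8           # speed of light, m/s
    ev = e * voltage_kv * 1e3  # kinetic energy in joules
    lam = h / math.sqrt(2.0 * m0 * ev * (1.0 + ev / (2.0 * m0 * c * c)))
    return lam * 1e12          # metres -> picometres

# At 100 kV the wavelength is about 3.7 pm, roughly 1e5 times shorter
# than 500 nm (5e5 pm) visible light.
lam_100kv = electron_wavelength_pm(100.0)
```

The practical resolution of real instruments is far coarser than this wavelength because lens aberrations, not diffraction, are the limiting factor, which is the point taken up in the TEM section below.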

[Figure: diagram of a transmission electron microscope: high-voltage electron gun, first condenser lens, condenser 
aperture, second condenser lens, condenser aperture, specimen holder and air-lock, objective lenses and aperture, 
electron beam, fluorescent screen and camera.]

[Figure: a 1973 Siemens electron microscope, Musée des Arts et Métiers, Paris.]




In 1931, the German physicist Ernst Ruska and German electrical 
engineer Max Knoll constructed the prototype electron microscope, 
capable of four-hundred-power magnification; the apparatus was a 
practical application of the principles of electron microscopy. Two 
years later, in 1933, Ruska built an electron microscope that exceeded 
the resolution attainable with an optical (lens) microscope. 
Moreover, Reinhold Rudenberg, the scientific director of 
Siemens-Schuckertwerke, obtained the patent for the electron 
microscope in May 1931. Family illness compelled the electrical 
engineer to devise an electrostatic microscope, because he wanted to 
make the poliomyelitis virus visible.

In 1937, the Siemens company financed the development work of 
Ernst Ruska and Bodo von Borries, and employed Helmut Ruska 
(Ernst's brother) to develop applications for the microscope, especially 
with biologic specimens. Also in 1937, Manfred von Ardenne 

pioneered the scanning electron microscope. The first practical 
electron microscope was constructed in 1938, at the University of 
Toronto, by Eli Franklin Burton and students Cecil Hall, James Hillier, 
and Albert Prebus; and Siemens produced the first commercial 
Transmission Electron Microscope (TEM) in 1939. Although 
contemporary electron microscopes are capable of two million-power 
magnification, as scientific instruments, they remain based upon 
Ruska's prototype. 

[Figure: electron microscope constructed by Ernst Ruska in 1933.]


Transmission electron microscope (TEM) 

The original form of electron microscope, the transmission electron microscope (TEM) uses a high voltage electron 
beam to create an image. The electrons are emitted by an electron gun, commonly fitted with a tungsten filament 
cathode as the electron source. The electron beam is accelerated by an anode typically at +100 kV (40 to 400 kV) 
with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the 
specimen that is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the 
specimen, the electron beam carries information about the structure of the specimen that is magnified by the 
objective lens system of the microscope. The spatial variation in this information (the "image") is viewed by 
projecting the magnified electron image onto a fluorescent viewing screen coated with a phosphor or scintillator 
material such as zinc sulfide. The image can be photographically recorded by exposing a photographic film or plate 
directly to the electron beam, or a high-resolution phosphor may be coupled by means of a lens optical system or a 
fibre optic light-guide to the sensor of a CCD (charge-coupled device) camera. The image detected by the CCD may 
be displayed on a monitor or computer. 

Resolution of the TEM is limited primarily by spherical aberration, but a new generation of aberration correctors 
has been able to partially overcome spherical aberration to increase resolution. Hardware correction of spherical 
aberration for the High Resolution TEM (HRTEM) has allowed the production of images with resolution below 0.5 
ångström (50 picometres) at magnifications above 50 million times. The ability to determine the positions of 
atoms within materials has made the HRTEM an important tool for nanotechnology research and development.



Scanning electron microscope (SEM) 

Unlike the TEM, where electrons of the high voltage beam carry the 
image of the specimen, the electron beam of the Scanning Electron 
Microscope (SEM) does not at any time carry a complete image of 
the specimen. The SEM produces images by probing the specimen 
with a focused electron beam that is scanned across a rectangular area 
of the specimen (raster scanning). At each point on the specimen the 
incident electron beam loses some energy, and that lost energy is 
converted into other forms, such as heat, emission of low-energy 
secondary electrons, light emission (cathodoluminescence) or x-ray 
emission. The display of the SEM maps the varying intensity of any of 
these signals into the image in a position corresponding to the position 
of the beam on the specimen when the signal was generated. In the 
SEM image of an ant shown at right, the image was constructed from 
signals produced by a secondary electron detector, the normal or 
conventional imaging mode in most SEMs. 

Generally, the image resolution of an SEM is about an order of magnitude poorer than that of a TEM. However, 
because the SEM image relies on surface processes rather than transmission, it is able to image bulk samples up to 
many centimetres in size and (depending on instrument design and settings) has a great depth of field, and so can 
produce images that are good representations of the three-dimensional shape of the sample. 
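The raster mapping just described can be sketched as a toy model. The `detector` function and grid size below are hypothetical stand-ins for the beam and detector electronics, not any real instrument interface:

```python
def raster_scan(detector, width, height):
    """Build an SEM-style image by scanning a beam over a grid.

    At each (x, y) beam position the detector returns a signal
    intensity (e.g. secondary-electron yield); the display simply
    maps that intensity to the corresponding image pixel.
    """
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):          # slow scan direction
        for x in range(width):       # fast scan direction
            image[y][x] = detector(x, y)
    return image

# Toy "specimen": a bright disc on a dark background.
def toy_detector(x, y, cx=8, cy=8, r=5):
    return 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.1

img = raster_scan(toy_detector, 16, 16)
```

The key point the sketch makes is that the image is built serially, pixel by pixel, from a single-channel detector signal; the SEM never holds a complete optical image of the specimen at any instant.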

An image of an ant in a scanning electron microscope 

Reflection electron microscope (REM) 

In the Reflection Electron Microscope (REM) as in the TEM, an electron beam is incident on a surface, but instead 
of using the transmission (TEM) or secondary electrons (SEM), the reflected beam of elastically scattered electrons 
is detected. This technique is typically coupled with Reflection High Energy Electron Diffraction (RHEED) and 
Reflection high-energy loss spectroscopy (RHELS). Another variation is Spin-Polarized Low-Energy Electron 
Microscopy (SPLEEM), which is used for looking at the microstructure of magnetic domains. 

Scanning transmission electron microscope (STEM) 

The STEM rasters a focused incident probe across a specimen that (as with the TEM) has been thinned to facilitate 
detection of electrons scattered through the specimen. The high resolution of the TEM is thus possible in STEM. The 
focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but afterward in the 
TEM. The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging and other analytical 
techniques, but also means that image data are acquired serially rather than in parallel. 

Low voltage electron microscope (LVEM) 

The low voltage electron microscope (LVEM) is a combination of SEM, TEM and STEM in one instrument, which 
operates at a relatively low electron accelerating voltage of 5 kV. Low voltage increases image contrast, which is 
especially important for biological specimens. This increase in contrast significantly reduces, or even eliminates, 
the need to stain. Sectioned samples generally need to be thinner than they would be for conventional TEM 
(20-65 nm). Resolutions of a few nm are possible in TEM, SEM and STEM modes. 



An insect coated in gold for viewing with a 
scanning electron microscope. 

Sample preparation 

Materials to be viewed under an electron microscope may require 
processing to produce a suitable sample. The technique required varies 
depending on the specimen and the analysis required: 

• Chemical fixation for biological specimens aims to stabilize the 
specimen's mobile macromolecular structure by chemical 
crosslinking of proteins with aldehydes such as formaldehyde and 
glutaraldehyde, and lipids with osmium tetroxide. 

• Cryofixation — freezing a specimen so rapidly, to liquid nitrogen or 
even liquid helium temperatures, that the water forms vitreous 
(non-crystalline) ice. This preserves the specimen in a snapshot of 
its solution state. An entire field called cryo-electron microscopy 
has branched from this technique. With the development of 
cryo-electron microscopy of vitreous sections (CEMOVIS), it is 
now possible to observe samples from virtually any biological 
specimen close to its native state. 

• Dehydration — freeze drying, or replacement of water with organic 

solvents such as ethanol or acetone, followed by critical point drying or infiltration with embedding resins. 

• Embedding, biological specimens — after dehydration, tissue for observation in the transmission electron 
microscope is embedded so it can be sectioned ready for viewing. To do this the tissue is passed through a 
'transition solvent' such as epoxy propane and then infiltrated with a resin such as Araldite epoxy resin; tissues 
may also be embedded directly in water-miscible acrylic resin. After the resin has been polymerised (hardened) 
the sample is thin sectioned (ultrathin sections) and stained - it is then ready for viewing. 

• Embedding, materials - after embedding in resin, the specimen is usually ground and polished to a mirror-like 
finish using ultra-fine abrasives. The polishing process must be performed carefully to minimize scratches and 
other polishing artifacts that reduce image quality. 

• Sectioning — produces thin slices of specimen, semitransparent to electrons. These can be cut on an 
ultramicrotome with a diamond knife to produce ultrathin slices about 60-90 nm thick. Disposable glass knives 
are also used because they can be made in the lab and are much cheaper. 

• Staining — uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give contrast 
between different structures, since many (especially biological) materials are nearly "transparent" to electrons 
(weak phase objects). In biology, specimens can be stained "en bloc" before embedding and also later after 
sectioning. Typically thin sections are stained for several minutes with an aqueous or alcoholic solution of uranyl 
acetate followed by aqueous lead citrate. 

• Freeze-fracture or freeze-etch — a preparation method particularly useful for examining lipid membranes and 
their incorporated proteins in "face on" view. The fresh tissue or cell suspension is frozen rapidly (cryofixed), 
then fractured by simply breaking or by using a microtome while maintained at liquid nitrogen temperature. The 
cold fractured surface (sometimes "etched" by increasing the temperature to about −100 °C for several minutes to 
let some ice sublime) is then shadowed with evaporated platinum or gold at an average angle of 45° in a high 
vacuum evaporator. A second coat of carbon, evaporated perpendicular to the average surface plane is often 
performed to improve stability of the replica coating. The specimen is returned to room temperature and pressure, 
then the extremely fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying 
biological material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The 
still-floating replica is thoroughly washed from residual chemicals, carefully fished up on fine grids, dried then 
viewed in the TEM. 

Electron Microscope 


• Ion Beam Milling — thins samples until they are transparent to electrons by firing ions (typically argon) at the 
surface from an angle and sputtering material from the surface. A subclass of this is Focused ion beam milling, 
where gallium ions are used to produce an electron transparent membrane in a specific region of the sample, for 
example through a device within a microprocessor. Ion beam milling may also be used for cross-section polishing 
prior to SEM analysis of materials that are difficult to prepare using mechanical polishing. 
• Conductive Coating — an ultrathin coating of electrically-conducting material, deposited either by high vacuum 
evaporation or by low vacuum sputter coating of the sample. This is done to prevent the accumulation of static 
electric fields at the specimen due to the electron irradiation required during imaging. Such coatings include gold, 
gold/palladium, platinum, tungsten, graphite etc. and are especially important for the study of specimens with the 
scanning electron microscope. Another reason for coating, even when there is more than enough conductivity, is 
to improve contrast, a situation more common with the operation of a FESEM (field emission SEM). 


Disadvantages 

Electron microscopes are expensive to build and maintain, but the 
capital and running costs of confocal light microscope systems now 
overlap with those of basic electron microscopes. They are dynamic 
rather than static in their operation, requiring extremely stable 
high-voltage supplies, extremely stable currents to each 
electromagnetic coil/lens, continuously-pumped high- or 
ultra-high-vacuum systems, and a cooling water supply circulation 
through the lenses and pumps. As they are very sensitive to vibration 
and external magnetic fields, microscopes designed to achieve high 
resolutions must be housed in stable buildings (sometimes 
underground) with special services such as magnetic field cancelling 
systems. Some desktop low voltage electron microscopes have TEM 
capabilities at very low voltages (around 5 kV) without stringent 
voltage supply, lens coil current, cooling water or vibration isolation 
requirements and as such are much less expensive to buy and far easier 
to install and maintain, but do not have the same ultra-high (atomic 
scale) resolution capabilities as the larger instruments. 

False-color SEM image of the filter setae of an Antarctic krill. (Raw electron microscope images carry no color 
information.) Pictured: first degree filter setae with V-shaped second degree setae pointing towards the inside of 
the feeding basket. The purple ball is 1 μm in diameter. 

The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the electrons. One 
exception is the environmental scanning electron microscope, which allows hydrated samples to be viewed in a 
low-pressure (up to 20 Torr / 2.7 kPa), wet environment. 

Scanning electron microscopes usually image conductive or semi-conductive materials best. Non-conductive 
materials can be imaged by an environmental scanning electron microscope. A common preparation technique is to 
coat the sample with a several-nanometer layer of conductive material, such as gold, from a sputtering machine; 
however, this process has the potential to disturb delicate samples. 

Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres, for 
example) require no special treatment before being examined in the electron microscope. Samples of hydrated 
materials, including almost all biological specimens have to be prepared in various ways to stabilize them, reduce 
their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These processes may 
result in artifacts, but these can usually be identified by comparing the results obtained by using radically different 
specimen preparation methods. Scientists working in the field generally believe that, because results from various 
preparation techniques have been compared and there is no reason they should all produce similar artifacts, it is 
reasonable to conclude that the features seen by electron microscopy correspond to those of living cells. In 
addition, higher-resolution work has been directly compared to results from X-ray crystallography, providing 
independent confirmation of the validity of this technique. Since the 1980s, analysis of cryofixed, vitrified 
specimens has also become increasingly used by scientists, further confirming the validity of this 
technique. [12] [13] [14] 


Semiconductor and data storage 

• Circuit edit 

• Defect analysis 

• Failure analysis 

Biology and life sciences 

• Diagnostic electron microscopy 

• Protein localization 

• Electron tomography 

• Cellular tomography 

• Cryo-electron microscopy 

• Biological production and viral load monitoring 

• Particle analysis 

• Pharmaceutical QC 

• Structural biology 

• 3D tissue imaging 




• Electron beam-induced deposition 

• Materials qualification 

• Materials and sample preparation 

• Nanoprototyping 

• Nanometrology 

• Device testing and characterization 


• High-resolution imaging 

• 2D & 3D micro-characterization 

• Macro sample to nanometer metrology 

• Particle detection and characterization 

• Direct beam-writing fabrication 

• Dynamic materials experiments 

• Sample preparation 

• Mining (mineral liberation analysis) 


See also 

• Category:Electron microscope images 

• Field emission microscope 


• Scanning tunneling microscope 

• Transmission Electron Aberration-corrected Microscope 

• Ultramicroscopy (journal) 


References 

[1] Ernst Ruska (1986). "Ernst Ruska Autobiography". Nobel Foundation. Retrieved 2010-01-31. 
[2] Kruger DH, Schneck P, Gelderblom HR (May 2000). "Helmut Ruska and the visualisation of viruses". Lancet 355 (9216): 1713-7. 
doi:10.1016/S0140-6736(00)02250-9. PMID 10905259. 
[3] M von Ardenne and D Beischer (1940). "Untersuchung von metalloxyd-rauchen mit dem universal-elektronenmikroskop" [Investigation of 
metal-oxide smokes with the universal electron microscope] (in German). Zeitschrift Electrochemie 46: 270-277. 
[4] "James Hillier". Inventor of the Week: Archive. 2003-05-01. Retrieved 2010-01-31. 
[5] Erni, Rolf; Rossell, MD; Kisielowski, C; Dahmen, U (2009). "Atomic-Resolution Imaging with a Sub-50-pm Electron Probe". Physical 
Review Letters 102 (9): 096101. doi:10.1103/PhysRevLett.102.096101. PMID 19392535. 
[6] "The Scale of Things". Office of Basic Energy Sciences, U.S. Department of Energy. 2006-05-26. Retrieved 2010-01-31. 
[7] O'Keefe MA, Allard LF. Sub-Angstrom Electron Microscopy for Sub-Angstrom Nano-Metrology (PDF). Information Bridge: DOE Scientific 
and Technical Information - Sponsored by OSTI. Retrieved 2010-01-31. 
[8] McMullan D (1993). "Scanning Electron Microscopy, 1928-1965". Cincinnati, OH. Retrieved 2010-01-31. 
[9] "SPLEEM". National Center for Electron Microscopy (NCEM). Retrieved 2010-01-31. 
[10] Nebesářová, Jana; Vancová, Marie (2007). "How to Observe Small Biological Objects in Low Voltage Electron Microscope". Microscopy 
and Microanalysis 13 (3): 248-249. 
[11] Drummy, Lawrence F.; Yang, Junyan; Martin, David C. (2004). "Low-voltage electron microscopy of polymer and organic molecular thin 
films". Ultramicroscopy 99 (4): 247-256. doi:10.1016/j.ultramic.2004.01.011. PMID 15149719. 
[12] Adrian, Marc; Dubochet, Jacques; Lepault, Jean; McDowall, Alasdair W. (1984). "Cryo-electron microscopy of viruses". Nature 308 
(5954): 32-36. doi:10.1038/308032a0. PMID 6322001. 
[13] Sabanay, I.; Arad, T.; Weiner, S.; Geiger, B. (1991). "Study of vitrified, unstained frozen tissue sections by cryoimmunoelectron 
microscopy" (http://jcs.biologists.Org/cgi/content/abstract/100/l/227). Journal of Cell Science 100 (1): 227-236. PMID 1795028. 
[14] Kasas, S.; Dumas, G.; Dietler, G.; Catsicas, S.; Adrian, M. (2003). "Vitrification of cryoelectron microscopy specimens revealed by 
high-speed photographic imaging". Journal of Microscopy 211 (1): 48-53. doi:10.1046/j.1365-2818.2003.01193.x. 

External links 

• Science Aid: Electron Microscopy. High School (GCSE, A Level) resource 

• Cell Centered Database - Electron microscopy data (http://ccdb.ucsd.edu/sand/main?typeid=4&) 

• Nanohedron.com | Nano image gallery: images generated with electron microscopy 

• electron microscopy: website of the ETH Zurich, with graphics and images that illustrate various procedures 

• Environmental Scanning Electron Microscope (ESEM) 

• X-ray element analysis in electron microscope: information portal with X-ray microanalysis and EDX contents 

• John H.L. Watson: Very early Electron Microscopy in the Department of Physics, the University of Toronto - a 
personal recollection 

• Rubin Borasky Electron Microscopy Collection, 1930-1988. Archives Center, National Museum of American History, 
Smithsonian Institution 

• The Royal Microscopical Society, Electron Microscopy Section (UK) 

• Albert Lleal micrograph. Scanning Electron Micrograph Coloured SEM 

Atomic force microscope 



Atomic force microscopy (AFM) or scanning force 
microscopy (SFM) is a very high-resolution type of 
scanning probe microscopy, with demonstrated resolution 
on the order of fractions of a nanometer, more than 1000 
times better than the optical diffraction limit. The 
precursor to the AFM, the scanning tunneling 
microscope, was developed by Gerd Binnig and Heinrich 
Rohrer in the early 1980s at IBM Research - Zurich, a 
development that earned them the Nobel Prize for Physics 
in 1986. Binnig, Quate and Gerber invented the first 
atomic force microscope (also abbreviated as AFM) in 
1986. The first commercially available atomic force 
microscope was introduced in 1989. The AFM is one of 
the foremost tools for imaging, measuring, and 
manipulating matter at the nanoscale. The information is 
gathered by "feeling" the surface with a mechanical 
probe. Piezoelectric elements that facilitate tiny but 
accurate and precise movements on (electronic) command 
enable the very precise scanning. In some variations, 
electric potentials can also be scanned using conducting 
cantilevers. In newer more advanced versions, currents 
can even be passed through the tip to probe the electrical 
conductivity or transport of the underlying surface, but 
this is much more challenging with very few groups 
reporting reliable data. 

Basic principles 

A commercial AFM setup 

Block diagram of an atomic force microscope (detector and feedback electronics; cantilever and tip over the sample 
surface; PZT scanner) 

Electron micrograph of a used AFM cantilever; image widths ~100 micrometers and ~30 micrometers 


The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan the specimen surface. The 
cantilever is typically silicon or silicon nitride with a tip radius of curvature on the order of nanometers. When the tip 
is brought into proximity of a sample surface, forces between the tip and the sample lead to a deflection of the 
cantilever according to Hooke's law. Depending on the situation, forces that are measured in AFM include 
mechanical contact force, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic 
forces (see magnetic force microscope, MFM), Casimir forces, solvation forces, etc. Along with force, additional 
quantities may simultaneously be measured through the use of specialized types of probe (see scanning thermal 
microscopy, photothermal microspectroscopy, etc.). Typically, the deflection is measured using a laser spot reflected 
from the top surface of the cantilever into an array of photodiodes. Other methods that are used include optical 
interferometry, capacitive sensing or piezoresistive AFM cantilevers. These cantilevers are fabricated with 
piezoresistive elements that act as a strain gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to 
deflection can be measured, but this method is not as sensitive as laser deflection or interferometry. 
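Because the cantilever obeys Hooke's law, the tip-sample force follows directly from the measured deflection and the cantilever's spring constant. A minimal sketch (the function name and the example values are illustrative assumptions, not taken from any particular instrument):

```python
def cantilever_force(deflection_m, spring_constant_n_per_m):
    """Hooke's law: restoring force of the cantilever, F = k * x.

    Typical AFM cantilevers have spring constants of roughly
    0.01-100 N/m, so nanometre-scale deflections correspond to
    forces in the piconewton-to-nanonewton range.
    """
    return spring_constant_n_per_m * deflection_m

# Illustrative example: a soft 0.1 N/m cantilever deflected by 1 nm
force = cantilever_force(1e-9, 0.1)   # 1e-10 N, i.e. 100 pN
```

This linear relation is also why low-stiffness cantilevers boost the deflection signal in static mode: for a given force, a smaller k gives a larger, easier-to-detect deflection.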

If the tip were scanned at a constant height, a risk would exist that the tip collides with the surface, causing damage. 
Hence, in most cases a feedback mechanism is employed to adjust the tip-to-sample distance to maintain a constant 
force between the tip and the sample. Traditionally, the sample is mounted on a piezoelectric tube, that can move the 
sample in the z direction for maintaining a constant force, and the x and y directions for scanning the sample. 
Alternatively a 'tripod' configuration of three piezo crystals may be employed, with each responsible for scanning in 
the x,y and z directions. This eliminates some of the distortion effects seen with a tube scanner. In newer designs, the 
tip is mounted on a vertical piezo scanner while the sample is being scanned in X and Y using another piezo block. 
The resulting map of the area z = f(x, y) represents the topography of the sample. 

The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes 
are divided into static (also called contact) modes and a variety of dynamic (or non-contact) modes where the 
cantilever is vibrated. 

Imaging modes 

The primary modes of operation for an AFM are static mode and dynamic mode. In static mode, the cantilever is 
"dragged" across the surface of the sample and the contours of the surface are measured directly using the deflection 
of the cantilever. In the dynamic mode, the cantilever is externally oscillated at or close to its fundamental resonance 
frequency or a harmonic. The oscillation amplitude, phase and resonance frequency are modified by tip-sample 
interaction forces. These changes in oscillation with respect to the external reference oscillation provide information 
about the sample's characteristics. 

Contact mode 

In the static mode operation, the static tip deflection is used as a feedback signal. Because the measurement of a 
static signal is prone to noise and drift, low stiffness cantilevers are used to boost the deflection signal. However, 
close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. 
Thus static mode AFM is almost always done in contact where the overall force is repulsive. Consequently, this 
technique is typically called "contact mode". In contact mode, the force between the tip and the surface is kept 
constant during scanning by maintaining a constant deflection. 




Non-contact mode 

In this mode, the tip of the cantilever 
does not contact the sample surface. 
The cantilever is instead oscillated at a 
frequency slightly above its resonance 
frequency where the amplitude of 
oscillation is typically a few 
nanometers (<10nm). The van der 
Waals forces, which are strongest from 
1 nm to 10 nm above the surface, or 
any other long range force which 
extends above the surface acts to 
decrease the resonance frequency of 
the cantilever. This decrease in 
resonance frequency combined with 
the feedback loop system maintains a 
constant oscillation amplitude or 
frequency by adjusting the average 
tip-to-sample distance. Measuring the 
tip-to-sample distance at each (x,y) 
data point allows the scanning software to construct a topographic image of the sample surface. 




AFM - non-contact mode 

Non-contact mode AFM does not suffer from tip or sample degradation effects that are sometimes observed after 
taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for measuring 
soft samples. In the case of rigid samples, contact and non-contact images may look the same. However, if a few 
monolayers of adsorbed fluid are lying on the surface of a rigid sample, the images may look quite different. An 
AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in 
non-contact mode an AFM will oscillate above the adsorbed fluid layer to image both the liquid and surface. 

Schemes for dynamic mode operation include frequency modulation and the more common amplitude modulation. 
In frequency modulation, changes in the oscillation frequency provide information about tip-sample interactions. 
Frequency can be measured with very high sensitivity and thus the frequency modulation mode allows for the use of 
very stiff cantilevers. Stiff cantilevers provide stability very close to the surface and, as a result, this technique was 
the first AFM technique to provide true atomic resolution in ultra-high vacuum conditions. 
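In frequency modulation, the connection between the tip-sample interaction and the measured signal is commonly written, in the standard small-force-gradient approximation (an assumption added here, not stated in the text above), as Δf ≈ −(f0 / 2k) · (∂F/∂z). A sketch with illustrative numbers:

```python
def frequency_shift(f0_hz, k_n_per_m, force_gradient_n_per_m):
    """Small-gradient approximation used in FM-AFM:

        delta_f ≈ -(f0 / (2 k)) * dF/dz

    With the sign convention used here, a positive force gradient
    (attractive interaction strengthening on approach) lowers the
    resonance frequency; the feedback loop turns that shift into a
    height adjustment. Conventions vary between treatments.
    """
    return -f0_hz * force_gradient_n_per_m / (2.0 * k_n_per_m)

# Illustrative values: a stiff 40 N/m cantilever at 300 kHz,
# experiencing a 0.4 N/m tip-sample force gradient
df = frequency_shift(300e3, 40.0, 0.4)   # -1500 Hz
```

The formula also shows why stiff cantilevers are viable in this mode: the shift scales as 1/k, but frequency can be measured so precisely that even the small shifts of a stiff lever are resolvable, while the stiffness keeps the tip stable close to the surface.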

In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for imaging. In 
amplitude modulation, changes in the phase of oscillation can be used to discriminate between different types of 
materials on the surface. Amplitude modulation can be operated either in the non-contact or in the intermittent 
contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation distance between the 
cantilever tip and the sample surface is modulated. 

Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using very 
stiff cantilevers and small amplitudes in an ultra-high vacuum environment. 



Tapping mode 

In ambient conditions, most samples develop a liquid meniscus 
layer. Because of this, keeping the probe tip close enough to the 
sample for short-range forces to become detectable while preventing 
the tip from sticking to the surface presents a major problem for 
non-contact dynamic mode in ambient conditions. Dynamic contact 
mode (also called intermittent contact or tapping mode) was 
developed to bypass this problem. 


Single polymer chains (0.4 nm thick) recorded in a 

tapping mode under aqueous media with different 

pH. [2] 

In tapping mode, the cantilever is driven to oscillate up and down at near its resonance frequency by a small 
piezoelectric element mounted in the AFM tip holder, similar to non-contact mode. However, the amplitude of this 
oscillation is greater than 10 nm, typically 100 to 200 nm. The forces acting on the cantilever when the tip comes 
close to the surface (Van der Waals forces, dipole-dipole interactions, electrostatic forces, etc.) cause the 
amplitude of this oscillation to decrease as the tip gets closer to the sample. An electronic servo uses the 
piezoelectric actuator to control the height of the cantilever above the sample. The servo adjusts the height to 
maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is 
therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface. 


This method of "tapping" lessens the damage done to the surface and the tip compared to the amount done in contact 
mode. Tapping mode is gentle enough even for the visualization of supported lipid bilayers or adsorbed single 
polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) under liquid medium. With 
proper scanning parameters, the conformation of single molecules can remain unchanged for hours. 
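The amplitude-holding servo described above can be sketched as a toy proportional feedback loop. The amplitude model and gain below are crude assumptions for illustration, not instrument firmware:

```python
def servo_step(height_nm, amplitude_nm, setpoint_nm, gain=0.5):
    """One iteration of a toy amplitude-feedback servo.

    If the measured oscillation amplitude drops below the setpoint
    (tip too close to the surface), the piezo raises the cantilever;
    if it exceeds the setpoint, the piezo lowers it.
    """
    error = setpoint_nm - amplitude_nm
    return height_nm + gain * error   # positive error -> retract

# Crude amplitude model (an assumption): the surface clips the
# free oscillation amplitude when the tip gets too close.
def amplitude(height_nm, free_amplitude_nm=150.0):
    return min(free_amplitude_nm, max(0.0, height_nm))

h = 80.0                      # start too close: amplitude clipped to 80 nm
for _ in range(50):
    h = servo_step(h, amplitude(h), setpoint_nm=120.0)
# h converges to the height at which the amplitude equals the setpoint
```

The recorded height at each scan position, once the loop has settled, is what becomes the pixel value in the tapping-mode topography image.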

AFM cantilever deflection measurement 




Laser light from a solid state diode is reflected off the back of the cantilever and collected by a position 
sensitive detector (PSD) consisting of two closely spaced photodiodes, whose output signal is collected by a 
differential amplifier. Angular displacement of the cantilever results in one photodiode collecting more light than 
the other photodiode, producing an output signal (the difference between the photodiode signals normalized by their 
sum) which is proportional to the deflection of the cantilever. This scheme detects cantilever deflections <10 nm 
(thermal noise limited). A long beam path (several centimeters) amplifies changes in beam angle. 

AFM beam deflection detection with a split photodiode detector 
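The normalized difference signal described above can be sketched directly (the function name and the example photocurrents are illustrative assumptions):

```python
def deflection_signal(top_photodiode, bottom_photodiode):
    """Normalized difference signal from a split photodiode:

        signal = (A - B) / (A + B)

    Dividing by the total collected light makes the signal
    insensitive to laser intensity fluctuations; to first order it
    is proportional to the cantilever's angular deflection.
    """
    total = top_photodiode + bottom_photodiode
    if total == 0:
        raise ValueError("no light on detector")
    return (top_photodiode - bottom_photodiode) / total

# Illustrative photocurrents: slightly more light on the top segment
s = deflection_signal(0.6, 0.4)   # 0.2
```

An undeflected cantilever illuminates both segments equally and gives a signal of zero; the sign of the signal tells the feedback electronics which way the cantilever has bent.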



Force spectroscopy 

Another major application of AFM (besides imaging) is force spectroscopy, the direct measurement of tip-sample 
interaction forces as a function of the gap between the tip and sample (the result of this measurement is called a 
force-distance curve). For this method, the AFM tip is extended towards and retracted from the surface as the 
deflection of the cantilever is monitored as a function of piezoelectric displacement. These measurements have been 
used to measure nanoscale contacts, atomic bonding, Van der Waals forces, Casimir forces, dissolution forces in 
liquids, and single molecule stretching and rupture forces. Furthermore, AFM has been used to measure, in aqueous 
environments, the dispersion force due to polymer adsorbed on the substrate. Forces of the order of a few 
piconewtons can now be routinely measured with a vertical distance resolution of better than 0.1 nanometer. Force 
spectroscopy can be performed with either static or dynamic modes. In dynamic modes, information about the 
cantilever vibration is monitored in addition to the static deflection. 
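A force-distance curve can be assembled from the raw (piezo displacement, deflection) record using Hooke's law. This is a sketch under the assumption of a simple linear cantilever; the function name, units, and example data are illustrative:

```python
def force_distance_curve(z_piezo_nm, deflection_nm, k_n_per_m):
    """Convert raw (piezo displacement, cantilever deflection) data
    into a force-distance curve.

    Force follows Hooke's law, F = k * d, and the tip-sample
    separation is estimated as the piezo displacement corrected by
    the cantilever's own bending (the separation itself cannot be
    measured directly). Note the unit bookkeeping:
    k [N/m] * d [nm] gives force in nN.
    """
    curve = []
    for z, d in zip(z_piezo_nm, deflection_nm):
        separation_nm = z - d
        force_nn = k_n_per_m * d
        curve.append((separation_nm, force_nn))
    return curve

# Illustrative approach data: free lever, snap-in (negative
# deflection), then repulsive contact (positive deflection).
curve = force_distance_curve([10, 5, 2, 1], [0.0, -0.5, 0.0, 1.0], 0.1)
```

The snap-in point appears as a sudden negative-force excursion in the curve, which is exactly the low-stiffness-cantilever artifact discussed in the paragraph that follows.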

Problems with the technique include no direct measurement of the tip-sample separation and the common need for 
low stiffness cantilevers which tend to 'snap' to the surface. The snap-in can be reduced by measuring in liquids or by 
using stiffer cantilevers, but in the latter case a more sensitive deflection sensor is needed. By applying a small dither 
to the tip, the stiffness (force gradient) of the bond can be measured as well. 


Identification of individual surface atoms 

The AFM can be used to image and manipulate atoms and structures 
on a variety of surfaces. The atom at the apex of the tip "senses" 
individual atoms on the underlying surface when it forms incipient 
chemical bonds with each atom. Because these chemical interactions 
subtly alter the tip's vibration frequency, they can be detected and 
mapped. This principle was used to distinguish between atoms of 
silicon, tin and lead on an alloy surface, by comparing these 'atomic 
fingerprints' to values obtained from large-scale density functional 
theory (DFT) simulations. 


The trick is to first measure these forces precisely for each type of 
atom expected in the sample, and then to compare with forces given by 
DFT simulations. The team found that the tip interacted most strongly 
with silicon atoms, and interacted 23% and 41% less strongly with tin 
and lead atoms, respectively. Thus, each different type of atom can be 
identified in the matrix as the tip is moved across the surface. 
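The fingerprint-matching idea can be sketched as a nearest-neighbour comparison. The normalized force values follow the 23% and 41% reductions quoted above; the matching procedure itself is a toy illustration, not the published method:

```python
def identify_atom(measured_force, fingerprints):
    """Return the species whose fingerprint force is closest to the
    measured maximum attractive force (forces normalized so that
    silicon = 1.0, per the relative interaction strengths above)."""
    return min(fingerprints,
               key=lambda species: abs(fingerprints[species] - measured_force))

# Relative fingerprint forces: tin 23% and lead 41% weaker than silicon
fingerprints = {"Si": 1.00, "Sn": 0.77, "Pb": 0.59}

atom = identify_atom(0.78, fingerprints)   # closest fingerprint: "Sn"
```

In the actual experiment the per-species forces come first from DFT-calibrated measurements; the lookup step is then as simple as the sketch suggests, repeated at each position as the tip scans the surface.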

Such a technique has been used in biology and extended recently to cell biology. Forces corresponding to (i) the 
unbinding of receptor-ligand couples, (ii) the unfolding of proteins, and (iii) cell adhesion at the single cell 
scale have been measured. 
The atoms of a sodium chloride crystal viewed 
with an atomic force microscope 



Advantages and disadvantages 

Just like any other tool, an AFM's usefulness has limitations. When 
determining whether or not analyzing a sample with an AFM is 
appropriate, there are various advantages and disadvantages that must 
be considered. 



The first atomic force microscope 

AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope, which 
provides a two-dimensional projection or a two-dimensional image of a sample, the AFM provides a three-dimensional 
surface profile. Additionally, samples viewed by AFM do not require any special treatments (such as metal/carbon 
coatings) that would irreversibly change or damage the sample. While an electron microscope needs an expensive 
vacuum environment for proper operation, most AFM modes can work perfectly well in ambient air or even a liquid 
environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM 
can provide higher resolution than SEM. It has been shown to give true atomic resolution in ultra-high vacuum (UHV) 
and, more recently, in liquid environments. High resolution AFM is comparable in resolution to scanning tunneling 
microscopy and transmission electron microscopy. 


A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single scan image size. In 
one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of 
millimeters, whereas the AFM can only image a maximum height on the order of 10-20 micrometers and a 
maximum scanning area of about 150x150 micrometers. One method of improving the scanned area size for AFM is 
by using parallel probes in a fashion similar to that of millipede data storage. 

The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as a SEM: a typical scan requires several minutes, while a SEM is capable of scanning at near real-time, although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to thermal drift in the image, making the AFM less suited for measuring accurate distances between topographical features. However, several fast-acting designs have been suggested to increase scanning productivity, including what is termed videoAFM (reasonable-quality images are obtained with videoAFM at video rate, faster than the average SEM). To eliminate image distortions induced by thermal drift, several methods have been introduced.


AFM images can also be affected by hysteresis of the piezoelectric material and cross-talk between the x, y, z 
axes that may require software enhancement and filtering. Such filtering could "flatten" out real topographical 
features. However, newer AFMs utilize closed-loop scanners which practically eliminate these problems. Some 
AFMs also use separated orthogonal scanners (as opposed to a single tube) which also serve to eliminate part of the 
cross-talk problems. 

As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself. Such artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods.

Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made 
cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact 
and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and 
additional artifacts. 

Atomic force microscope 202 

Piezoelectric scanners 

AFM scanners are made from piezoelectric material, which expands and contracts proportionally to an applied 
voltage. Whether they elongate or contract depends upon the polarity of the voltage applied. The scanner is 
constructed by combining independently operated piezo electrodes for X, Y, and Z into a single tube, forming a 
scanner which can manipulate samples and probes with extreme precision in 3 dimensions. 

Scanners are characterized by their sensitivity, which is the ratio of piezo movement to piezo voltage, i.e., by how much the piezo material extends or contracts per applied volt. Because of differences in material or size, the sensitivity varies from scanner to scanner. Sensitivity varies non-linearly with respect to scan size: piezo scanners exhibit more sensitivity at the end than at the beginning of a scan. This causes the forward and reverse scans to behave differently and display hysteresis between the two scan directions. This can be corrected by applying a non-linear voltage to the piezo electrodes to cause linear scanner movement, and by calibrating the scanner accordingly.

The sensitivity of piezoelectric materials decreases exponentially with time, so most of the change in sensitivity occurs in the initial stages of the scanner's life. Piezoelectric scanners are run for approximately 48 hours before they are shipped from the factory so that they are past the point where they may have large changes in sensitivity. As the scanner ages, the sensitivity changes less with time, and the scanner seldom requires recalibration.

See also 

• Frictional force mapping 

• Scanning tunneling microscope 

• Scanning probe microscopy 

• Scanning voltage microscopy 

• Surface force apparatus 


References

[1] Giessibl, Franz J. (2003). "Advances in atomic force microscopy". Reviews of Modern Physics 75: 949. doi:10.1103/RevModPhys.75.949.
[2] Roiter, Y.; Minko, S. (Nov 2005). "AFM single molecule experiments at the solid-liquid interface: in situ conformation of adsorbed flexible polyelectrolyte chains". Journal of the American Chemical Society 127 (45): 15688-9. doi:10.1021/ja0558239. ISSN 0002-7863. PMID 16277495.
[3] Zhong, Q. (1993). "Fractured polymer/silica fiber surface studied by tapping mode atomic force microscopy". Surface Science Letters 290: L688. doi:10.1016/0167-2584(93)90906-Y.
[4] Hinterdorfer, P.; Dufrene, Y. F. (May 2006). "Detection and localization of single molecular recognition events using atomic force microscopy". Nature Methods 3 (5): 347-55. doi:10.1038/nmeth871. ISSN 1548-7091. PMID 16628204.
[5] Ferrari, L.; Kaufmann, J.; Winnefeld, F.; Plank, J. (2010). "Interaction of cement model systems with superplasticizers investigated by atomic force microscopy, zeta potential, and adsorption measurements". Journal of Colloid and Interface Science 347 (1): 15-24.
[6] "Force measurements with the atomic force microscope: Technique, interpretation and applications". Surface Science Reports 59: 1-152.
[7] M. Hoffmann, Ahmet Oral, Ralph A. G, Peter (2001). "Direct measurement of interatomic force gradients using an ultra-low-amplitude atomic force microscope". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 457: 1161.
[8] Sugimoto, Y.; Pou, P.; Abe, M.; Jelinek, P.; Perez, R.; Morita, S.; Custance, O. (Mar 2007). "Chemical identification of individual surface atoms by atomic force microscopy". Nature 446 (7131): 64-7. doi:10.1038/nature05530. ISSN 0028-0836. PMID 17330040.
[9] R. V. Lapshin (2004). "Feature-oriented scanning methodology for probe microscopy and nanotechnology" (http://www.nanoworld.org/homepages/lapshin/publications.htm#feature2004) (PDF). Nanotechnology (UK: IOP) 15 (9): 1135-1151. doi:10.1088/0957-4484/15/9/006. ISSN 0957-4484.
[10] R. V. Lapshin (2007). "Automatic drift elimination in probe microscope images based on techniques of counter-scanning and topography feature recognition" (http://www.nanoworld.org/homepages/lapshin/publications.htm#automatic2007) (PDF). Measurement Science and Technology (UK: IOP) 18 (3): 907-927. doi:10.1088/0957-0233/18/3/046. ISSN 0957-0233.
[11] G. Schitter, M. J. Rost (2008). "Scanning probe microscopy at video-rate" (PDF). Materials Today (UK: Elsevier) 11 (special issue): 40-48. doi:10.1016/S1369-7021(09)70006-9. ISSN 1369-7021.
[12] R. V. Lapshin, O. V. Obyedkov (1993). "Fast-acting piezoactuator and digital feedback loop for scanning tunneling microscopes" (PDF). Review of Scientific Instruments (USA: AIP) 64 (10): 2883-2887. doi:10.1063/1.1144377. ISSN 0034-6748.
[13] R. V. Lapshin (1995). "Analytical model for the approximation of hysteresis loop and its application to the scanning tunneling microscope" (http://www.nanoworld.org/homepages/lapshin/publications.htm#analytical1995) (PDF). Review of Scientific Instruments (USA: AIP) 66 (9): 4718-4730. doi:10.1063/1.1145314. ISSN 0034-6748.
[14] R. V. Lapshin (1998). "Automatic lateral calibration of tunneling microscope scanners" (http://www.nanoworld.org/homepages/lapshin/publications.htm#automatic1998) (PDF). Review of Scientific Instruments (USA: AIP) 69 (9): 3268-3276. doi:10.1063/1.1149091. ISSN 0034-6748.

External links 

• ME 597/PHYS 570: Fundamentals of Atomic Force Microscopy ( 

• DoITPoMS Teaching and Learning Package - Atomic Force Microscopy ( 

Further reading 

• SPM - Scanning Probe Microscopy Website ( 

• Atomic Force Microscopy resource library ( 

• R. W. Carpick and M. Salmeron, "Scratching the surface: Fundamental investigations of tribology with atomic force microscopy", Chemical Reviews, vol. 97, iss. 4, pp. 1163-1194

Neutron scattering

Neutron scattering encompasses all scientific techniques in which the deflection of neutron radiation is used as a scientific probe. Neutrons readily interact with atomic nuclei and with the magnetic fields of unpaired electrons, making them a useful probe of both structure and magnetic order. Neutron scattering falls into two basic categories: elastic and inelastic. In elastic scattering, a neutron interacts with a nucleus or an electronic magnetic field but does not leave it in an excited state, so the scattered neutron has the same energy as the incident neutron. Scattering processes that involve an energetic excitation or relaxation are inelastic: the incident neutron either gives up energy to create an excitation or absorbs the excess energy of a relaxation, and the scattered neutron's energy is correspondingly reduced or increased.

For several good reasons, moderated neutrons provide an ideal tool for the study of almost all forms of condensed matter. Firstly, they are readily produced at a nuclear research reactor or a spallation source. Normally, however, neutrons are produced in such processes with much higher energies than are needed, so moderators are generally used to slow them down, producing wavelengths that are comparable to the atomic spacing in solids and liquids and kinetic energies that are comparable to those of dynamic processes in materials. Moderators can be made from aluminium and filled with liquid hydrogen (for very long wavelength neutrons) or liquid methane (for shorter wavelength neutrons). Fluxes of 10^7/s to 10^8/s are not atypical at most neutron sources from any given moderator.
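The claim that moderated neutrons have wavelengths comparable to atomic spacings follows directly from the de Broglie relation. A short sketch using standard physical constants; the moderator temperatures are illustrative:

```python
import math

# Physical constants (SI)
H = 6.62607015e-34     # Planck constant, J*s
M_N = 1.67492749e-27   # neutron mass, kg
K_B = 1.380649e-23     # Boltzmann constant, J/K

def thermal_wavelength(temp_kelvin):
    """de Broglie wavelength of a neutron moderated to temperature T,
    taking a characteristic kinetic energy E ~ k_B*T, so that
    lambda = h / sqrt(2 * m_n * k_B * T)."""
    return H / math.sqrt(2.0 * M_N * K_B * temp_kelvin)

# Room-temperature moderator: about 1.8 angstroms (1.8e-10 m),
# comparable to interatomic spacings in solids and liquids.
lam_thermal = thermal_wavelength(293.0)

# A cold (e.g. liquid-hydrogen, ~20 K) moderator gives much longer wavelengths.
lam_cold = thermal_wavelength(20.0)
```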

The neutrons cause pronounced interference and energy transfer effects in scattering experiments. Unlike an x-ray 
photon with a similar wavelength, which interacts with the electron cloud surrounding the nucleus, neutrons interact 
with the nucleus itself. Because the neutron is an electrically neutral particle, it is deeply penetrating, and is therefore 
more able to probe the bulk material. Consequently, it enables the use of a wide range of sample environments that are difficult to use with synchrotron x-ray sources. It also has the advantage that the cross sections for interaction do not increase with atomic number as they do with radiation from a synchrotron x-ray source. Thus neutrons can be used to analyse materials with low atomic numbers, such as proteins and surfactants. This can be done at synchrotron sources, but very high intensities are needed, which may cause the structures to change. Moreover, the nucleus provides a very short-range, isotropic potential varying randomly from isotope to isotope, making it possible to tune the nuclear scattering contrast to suit the experiment.

The neutron has an additional advantage over the x-ray photon in the study of condensed matter. It readily interacts 
with internal magnetic fields in the sample. In fact, the strength of the magnetic scattering signal is often very similar 
to that of the nuclear scattering signal in many materials, which allows the simultaneous exploration of both nuclear 
and magnetic structure. Because the neutron scattering amplitude can be measured in absolute units, both the 
structural and magnetic properties as measured by neutrons can be compared quantitatively with the results of other 
characterisation techniques. 

See also 

• Neutron diffraction 

• Small angle neutron scattering 

• Neutron Reflectometry 

• Inelastic neutron scattering 

• neutron triple-axis spectrometry 

• neutron time-of-flight scattering 

• neutron backscattering 

• neutron spin echo 

• neutron resonance spin echo 

• Neutron scattering facilities 


Neutron scattering has been used to study various vibration modes, including low-frequency collective motion in proteins and DNA, [2] [3] [4] [5] [6] as reviewed by P. Martel in 1992. [1]

References
[1] Martel, P. (1992) Biophysical aspects of neutron scattering from vibrational modes of proteins. Prog Biophys Mol Biol, 57, 129-179. 

[2] Chou, K.C. (1983) Low-frequency vibrations of helical structures in protein molecules. Biochemical Journal, 209, 573-580. 

[3] Chou, K.C. (1985) Low-frequency motions in protein molecules: beta-sheet and beta-barrel. Biophysical Journal, 48, 289-297. 

[4] Chou, K.C, Maggiora, G.M. and Mao, B. (1989) Quasi-continuum models of twist-like and accordion-like low-frequency motions in DNA. 

Biophysical Journal, 56, 295-305. 
[5] Kuo-Chen Chou (1988) Review: Low-frequency collective motion in biomacromolecules and its biological functions. Biophysical Chemistry, 

30, 3-48. 
[6] Chou, K.C. (1989) Low-frequency resonance and cooperativity of hemoglobin. Trends in Biochemical Sciences, 14, 212. 



External links 

• Neutron Scattering - A primer ( 
neutrons-a-primer-by-rogen-pynn.pdf) ( LANL-hosted black and white version ( 
getfile700326651.pdf)) - An introductory article written by Roger Pynn (Los Alamos National Laboratory) 

• Podcast Interview with two ILL scientists about neutron science/scattering at the ILL (http://omegataupodcast. 

ISIS neutron source 

ISIS is a pulsed neutron and muon source. It is situated at the Rutherford Appleton Laboratory on the Harwell Science and Innovation Campus in Oxfordshire, United Kingdom, and is part of the Science and Technology Facilities Council. It uses the techniques of muon spectroscopy and neutron scattering to probe the structure and dynamics of condensed matter on a microscopic scale, ranging from the subatomic to the macromolecular.

Hundreds of experiments are performed annually at ISIS by visiting researchers from around the world, in diverse science areas including physics, chemistry, materials engineering, earth sciences and biology.

Neutrons and muons 

Neutrons are uncharged constituents of 
atoms and penetrate materials well, 
deflecting only from the nuclei of atoms. 
The statistical accumulation of deflected 
neutrons at different positions beyond the 
sample can be used to find the structure of a material, and the loss or gain of energy by neutrons can reveal the 
dynamic behaviour of parts of a sample, for example diffusive processes in solids. At ISIS the neutrons are created 
by accelerating 'bunches' of protons in a synchrotron, then colliding these with a heavy tantalum metal target, under a 
constant cooling load to dissipate the heat from the 160 kW proton beam. The impacts cause neutrons to spall off the 
tantalum atoms, and the neutrons are channelled through guides, or beamlines, to about 20 instruments, individually 
optimised for the study of different types of matter. The target station and most of the instruments are set in a large 
hall. Neutrons are a dangerous form of radiation, so the target and beamlines are heavily shielded with concrete. 
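Because the source is pulsed, ISIS instruments can determine each neutron's wavelength from its time of flight over a known path, and Bragg's law then converts scattering angles into lattice spacings. A sketch with illustrative numbers; the flight path, arrival time and detector angle are assumptions, not parameters of any particular instrument:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
M_N = 1.67492749e-27  # neutron mass, kg

def wavelength_from_tof(t_s, path_m):
    """Wavelength from time of flight: v = L/t and lambda = h/(m_n*v),
    so lambda = h*t / (m_n * L)."""
    return H * t_s / (M_N * path_m)

def bragg_d_spacing(lam_m, two_theta_deg):
    """Bragg's law n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return lam_m / (2.0 * math.sin(theta))

# Illustrative: a neutron arriving 5 ms after the pulse over a 10 m path...
lam = wavelength_from_tof(5e-3, 10.0)   # ~2 angstroms
# ...detected at a scattering angle of 2*theta = 90 degrees.
d = bragg_d_spacing(lam, 90.0)          # lattice spacing, ~1.4 angstroms
```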

ISIS produces muons by colliding a fraction of the proton beam with a graphite target, producing pions which decay 
rapidly into muons, delivered in a spin-polarised beam to sample stations. 



Another view of the ISIS experimental hall for Target Station 1 

Science at ISIS 

ISIS is administered and operated by the Science and Technology Facilities Council (previously CCLRC). Experimental time is open to academic users from funding countries and is applied for through a twice-yearly 'call for proposals'. Research allocation, or 'beam-time', is allotted to applicants via a peer-review process. Users and their parent institutions do not pay for the running costs of the facility, which are as much as £11,000 per instrument per day. Their transport and living costs are also refunded whilst carrying out the experiment. Most users stay in Ridgeway House, a hotel near the site, or at Cosener's House, an STFC-run conference centre in Abingdon. Over 600 experiments by 1600 users are completed every year.

A large number of support staff operate the facility, aid users, and carry out research; the control room is staffed 24 hours a day, every day of the year. Instrument scientists oversee the running of each instrument and liaise with users, and other divisions provide sample environment, data analysis and computing expertise, maintain the accelerator, and run education programmes.

Among the important and pioneering work carried out at ISIS was the discovery of the structure of high-temperature superconductors and of the solid phase of buckminsterfullerene.

Construction of a second target station started in 2003, and the first neutrons were delivered to the target on December 14, 2007. It will use low-energy neutrons to study soft condensed matter, biological systems, advanced composites and nanomaterials. To supply the extra protons for this, the accelerator is being upgraded.

History and background of ISIS 

The source was approved in 1977 for the RAL site on the Harwell campus and recycled components from earlier UK science programmes, including the accelerator hall, which had previously been occupied by the Nimrod accelerator. The first beam was produced in 1984, and the facility was formally opened by the then Prime Minister Margaret Thatcher in October 1985.


The name ISIS is not an acronym: it refers to the Ancient Egyptian goddess and the local name for the River Thames. The name was selected for the official opening of the facility in 1985; prior to this it was known as the SNS, or Spallation Neutron Source. The name was considered appropriate as Isis was a goddess who could restore life to the dead, and ISIS made use of equipment previously constructed for the Nimrod and Nina accelerators.



External links 


• ISIS facility
• ISIS Second Target Station
• The Science and Technology Facilities Council




[1] ISIS Second Target Station Project ( 

[2] Linacs at the Rutherford Appleton Laboratory ( 

[3] Explanation of the name of ISIS ( 




Geographical coordinates: 51°34'18"N 1°19'12"W 


Synchrotron

A synchrotron is a particular type of cyclic particle accelerator in which the magnetic field (to turn the particles so they circulate) and the electric field (to accelerate the particles) are carefully synchronised with the travelling particle beam. The proton synchrotron was originally conceived by Sir Marcus Oliphant. The honour of being the first to publish the idea went to Vladimir Veksler, and the first electron synchrotron was constructed by Edwin McMillan.


While a cyclotron uses a constant magnetic field and a constant-frequency applied electric field (one of these is 
varied in the synchrocyclotron), both of these fields are varied in the synchrotron. By increasing these parameters 
appropriately as the particles gain energy, their path can be held constant as they are accelerated. This allows the 
vacuum chamber for the particles to be a large thin torus. In reality it is easier to use some straight sections between 
the bending magnets and some bent sections within the magnets giving the torus the shape of a round-cornered 
polygon. A path of large effective radius may thus be constructed using simple straight and curved pipe segments, 
unlike the disc-shaped chamber of the cyclotron type devices. The shape also allows and requires the use of multiple 
magnets to bend the particle beams. Straight sections are required at intervals around the ring for radiofrequency cavities, and in third-generation machines space is also allowed for insertion devices such as wigglers and undulators, which extract energy from the beam as synchrotron light.

The maximum energy that a cyclic accelerator can impart is typically limited by the strength of the magnetic field(s) 
and the minimum radius (maximum curvature) of the particle path. 



The interior of the Australian Synchrotron facility. Dominating the image is the storage 

ring, showing the optical diagnostic beamline at front right. In the middle of the storage 

ring is the booster synchrotron and linac 

In a cyclotron the maximum radius is quite limited, as the particles start at the center and spiral outward; thus the entire path must be a self-supporting disc-shaped evacuated chamber. Since the radius is limited, the power of the machine becomes limited by the strength of the magnetic field. In the case of an ordinary electromagnet, the field strength is limited by the saturation of the core (when all magnetic domains are aligned the field may not be further increased to any practical extent). The arrangement of a single pair of magnets the full width of the device also limits the economic size of the device.

Synchrotrons overcome these limitations, using a narrow beam pipe which can be surrounded by much smaller and 
more tightly focusing magnets. The ability of this device to accelerate particles is limited by the fact that the particles 
must be charged to be accelerated at all, but charged particles under acceleration emit photons (light), thereby losing 
energy. The limiting beam energy is reached when the energy lost to the lateral acceleration required to maintain the 
beam path in a circle equals the energy added each cycle. More powerful accelerators are built by using large radius 
paths and by using more numerous and more powerful microwave cavities to accelerate the particle beam between 
corners. Lighter particles (such as electrons) lose a larger fraction of their energy when turning. Practically speaking, 
the energy of electron/positron accelerators is limited by this radiation loss, while it does not play a significant role 
in the dynamics of proton or ion accelerators. The energy of those is limited strictly by the strength of magnets and 
by the cost. 
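The mass dependence behind this is strong: the energy radiated per turn scales as the fourth power of E/(mc^2), so an electron radiates roughly (m_p/m_e)^4, about 10^13, times more than a proton of the same energy on the same orbit. A sketch using the standard electron formula U0[GeV] = 8.85e-5 * E[GeV]^4 / rho[m]; the ring energy and bending radius below are illustrative assumptions, not the parameters of any named facility:

```python
C_GAMMA = 8.85e-5  # m/GeV^3: electron radiation constant in U0 = C_GAMMA*E^4/rho

M_E = 0.000511  # electron rest energy, GeV
M_P = 0.938     # proton rest energy, GeV

def loss_per_turn_gev(energy_gev, bend_radius_m, rest_mass_gev):
    """Synchrotron radiation loss per turn.  The electron formula is
    rescaled by (m_e/m)^4, since the radiation constant goes as 1/m^4."""
    mass_scale = (M_E / rest_mass_gev) ** 4
    return C_GAMMA * energy_gev ** 4 / bend_radius_m * mass_scale

# Illustrative 6 GeV electron ring with a 25 m bending radius:
u_electron = loss_per_turn_gev(6.0, 25.0, M_E)   # a few MeV radiated per turn
# A proton of the same energy on the same orbit radiates ~1e13 times less:
u_proton = loss_per_turn_gev(6.0, 25.0, M_P)
```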

Design and operation 

Particles are injected into the main ring at substantial energies by either a linear accelerator or by an intermediate 
synchrotron which is in turn fed by a linear accelerator. The "linac" is in turn fed by particles accelerated to 
intermediate energy by a simple high voltage power supply, typically a Cockcroft-Walton generator. 

Starting from an appropriate initial value determined by the injection velocity the magnetic field is then increased. 
The particles pass through an electrostatic accelerator driven by a high alternating voltage. At particle speeds not 
close to the speed of light the frequency of the accelerating voltage can be made roughly proportional to the current 
in the bending magnets. A finer control of the frequency is performed by a servo loop which responds to the 
detection of the passing of the traveling group of particles. At particle speeds approaching light speed the frequency 
becomes more nearly constant, while the current in the bending magnets continues to increase. The maximum energy 
that can be applied to the particles (for a given ring size and magnet count) is determined by the saturation of the 
cores of the bending magnets (the point at which increasing current does not produce additional magnetic field). One 
way to obtain additional power is to make the torus larger and add additional bending magnets. This allows the 
amount of particle redirection at saturation to be less and so the particles can be more energetic. Another means of 
obtaining higher power is to use superconducting magnets, these not being limited by core saturation. 
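The frequency behaviour described above can be made concrete: the revolution frequency f = beta*c/C rises steeply at low energy and flattens towards c/C as the particles become relativistic. A sketch for protons; the ring circumference is an assumed illustrative value:

```python
import math

C_LIGHT = 2.99792458e8  # speed of light, m/s
M_P = 0.938             # proton rest energy, GeV
RING = 200.0            # illustrative ring circumference, m (assumed)

def revolution_frequency(kinetic_gev, rest_mass_gev, circumference_m):
    """Revolution frequency f = beta*c / C for a given kinetic energy:
    gamma = 1 + T/(m*c^2), beta = sqrt(1 - 1/gamma^2)."""
    gamma = 1.0 + kinetic_gev / rest_mass_gev
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return beta * C_LIGHT / circumference_m

f_low = revolution_frequency(0.05, M_P, RING)   # 50 MeV: f still climbing fast
f_mid = revolution_frequency(1.0, M_P, RING)    # 1 GeV: mildly relativistic
f_high = revolution_frequency(50.0, M_P, RING)  # 50 GeV: f nearly at c/C
```

The RF system must track `f_low` through `f_high` during the ramp, which is why the accelerating frequency is swept at injection but becomes almost constant at high energy.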



Large synchrotrons 

One of the early large synchrotrons, now retired, is the Bevatron, constructed in 1950 at the Lawrence Berkeley Laboratory. The name of this proton accelerator comes from its beam energy, in the range of 6.3 GeV (then called BeV for billion electron volts; the name predates the adoption of the SI prefix giga-). A number of heavy elements, unseen in the natural world, were first created with this machine. This site is also the location of one of the first large bubble chambers used to examine the results of the atomic collisions produced here.

Another early large synchrotron is the Cosmotron, built at Brookhaven National Laboratory, which reached 3.3 GeV in 1953.


Until August 2008, the highest energy synchrotron in the world was the Tevatron, at the Fermi National Accelerator 
Laboratory, in the United States. It accelerates protons and antiprotons to slightly less than 1 TeV of kinetic energy 
and collides them together. The Large Hadron Collider (LHC), which has been built at the European Laboratory for 
High Energy Physics (CERN), has roughly seven times this energy (so proton-proton collisions occur at roughly 14 
TeV). It is housed in the 27 km tunnel which formerly housed the Large Electron Positron (LEP) collider, so it will 
maintain the claim as the largest scientific device ever built. The LHC will also accelerate heavy ions (such as lead) 
up to an energy of 1.15 PeV. 

The largest device of this type seriously proposed was the Superconducting Super Collider (SSC), which was to be 
built in the United States. This design, like others, used superconducting magnets which allow more intense 
magnetic fields to be created without the limitations of core saturation. While construction was begun, the project 
was cancelled in 1994, citing excessive budget overruns — this was due to naive cost estimation and economic 
management issues rather than any basic engineering flaws. It can also be argued that the end of the Cold War 
resulted in a change of scientific funding priorities that contributed to its ultimate cancellation. While there is still 
potential for yet more powerful proton and heavy particle cyclic accelerators, it appears that the next step up in 
electron beam energy must avoid losses due to synchrotron radiation. This will require a return to the linear 
accelerator, but with devices significantly longer than those currently in use. There is at present a major effort to 
design and build the International Linear Collider (ILC), which will consist of two opposing linear accelerators, one 
for electrons and one for positrons. These will collide at a total center of mass energy of 0.5 TeV. 

However, synchrotron radiation also has a wide range of applications (see synchrotron light) and many 2nd and 3rd 
generation synchrotrons have been built especially to harness it. The largest of those 3rd generation synchrotron light 
sources are the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, the Advanced Photon Source 
(APS) near Chicago, USA, and SPring-8 in Japan, accelerating electrons up to 6, 7 and 8 GeV, respectively. 

Synchrotrons which are useful for cutting edge research are large machines, costing tens or hundreds of millions of 
dollars to construct, and each beamline (there may be 20 to 50 at a large synchrotron) costs another two or three 
million dollars on average. These installations are mostly built by the science funding agencies of governments of 
developed countries, or by collaborations between several countries in a region, and operated as infrastructure 
facilities available to scientists from universities and research organisations throughout the country, region, or world. 
More compact models, however, have been developed, such as the Compact Light Source. 



List of installations

• Advanced Photon Source (APS) – Argonne National Laboratory, USA
• Cerdanyola del Valles near Barcelona
• Rutherford Appleton Laboratory, UK
• Australian Synchrotron – Melbourne, Australia
• Karlsruhe Institute of Technology
• Campinas, Brazil
• Allaan, Jordan (under design)
• Lawrence Berkeley Laboratory, USA
• Advanced Light Source – Lawrence Berkeley Laboratory, USA
• Brookhaven National Laboratory, USA
• National Synchrotron Light Source – Brookhaven National Laboratory, USA
• Rutherford Appleton Laboratory, UK
• Alternating Gradient Synchrotron – Brookhaven National Laboratory, USA
• Stanford Synchrotron Radiation Lightsource – SLAC National Accelerator Laboratory
• Synchrotron Radiation Center – Madison, USA
• Cornell High Energy Synchrotron Source (CHESS) – Cornell University, USA
• Paris, France
• Shanghai Synchrotron Radiation Facility (SSRF) – Shanghai, China
• Proton Synchrotron – CERN, Switzerland
• Fermi National Accelerator Laboratory
• Swiss Light Source – Paul Scherrer Institute, Switzerland
• Large Hadron Collider (LHC) – CERN, Switzerland
• Helmholtz-Zentrum Berlin in Berlin
• European Synchrotron Radiation Facility (ESRF) – Grenoble, France
• MAX-lab, Sweden
• Trieste, Italy
• Synchrotron Radiation Source – Daresbury Laboratory, UK
• Aarhus University, Denmark
• Diamond Light Source – Oxfordshire, UK
• DESY, Germany
• Canadian Light Source – University of Saskatchewan, Canada
• RIKEN, Japan
• Tsukuba, Japan
• National Synchrotron Radiation Research Center – Hsinchu Science Park, Taiwan
• Synchrotron Light Research Institute (SLRI) – Nakhon Ratchasima, Thailand
• Indus 1 – Raja Ramanna Centre for Advanced Technology, Indore, India
• Indus 2 – Raja Ramanna Centre for Advanced Technology, Indore, India
• JINR, Dubna, Russia
• U-70 synchrotron – IHEP, Protvino, Russia
• LSU, Louisiana, US
• PAL, Pohang, Korea
Note: in the case of colliders, the quoted energy is often double what is shown here. The above table shows the 
energy of one beam but if two opposing beams collide head on, the centre of mass energy is double the beam 
energy shown. 
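The note above is worth quantifying: for equal head-on beams the centre-of-mass energy is simply twice the beam energy, whereas the same beam striking a stationary target yields only the square root of roughly 2*E*m. A sketch in natural units (energies in GeV, c = 1):

```python
import math

M_P = 0.938  # proton rest energy, GeV

def cm_energy_collider(beam_gev):
    """Head-on collision of two equal, counter-rotating beams:
    the centre-of-mass energy is twice the beam energy."""
    return 2.0 * beam_gev

def cm_energy_fixed_target(beam_gev, mass_gev):
    """Same beam on a stationary target of the same species:
    s = 2*m^2 + 2*E*m, so E_cm = sqrt(2*m^2 + 2*E*m)."""
    m = mass_gev
    return math.sqrt(2.0 * m * m + 2.0 * beam_gev * m)

# LHC-style 7 TeV (7000 GeV) proton beams:
e_collider = cm_energy_collider(7000.0)        # 14 TeV centre-of-mass
e_fixed = cm_energy_fixed_target(7000.0, M_P)  # only ~0.1 TeV on a fixed target
```

This square-root penalty is the reason modern high-energy machines are built as colliders rather than fixed-target accelerators.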


Applications

• Life sciences: protein and large molecule crystallography
• LIGA-based microfabrication
• Drug discovery and research
• "Burning" computer chip designs into metal wafers
• Studying molecule shapes and protein crystals
• Analysing chemicals to determine their composition
• Observing the reaction of living cells to drugs
• Inorganic material crystallography and microanalysis
• Fluorescence studies
• Semiconductor material analysis and structural studies
• Geological material analysis
• Medical imaging
• Proton therapy to treat some forms of cancer


See also 

• List of synchrotron radiation facilities 

• Synchrotron X-ray tomographic microscopy 

• Energy amplifier 

• Superconducting Radio Frequency 


[1] Nature 407, 468 (28 September 2000) ( 
[2] The Cosmotron ( 

External links 

• Canadian Light Source
• Australian Synchrotron
• Diamond UK Synchrotron
• CERN Large Hadron Collider
• Synchrotron Light Sources of the World
• A Miniature Synchrotron: room-size synchrotron offers scientists a new way to perform high-quality x-ray experiments in their own labs, Technology Review, February 04, 2008
• Brazilian Synchrotron Light Laboratory
• Podcast interview with a scientist at the European Synchrotron Radiation Facility


Medical Diagnostics in Cardiology 





Doctor, Medical Specialist
Activity sectors: Medicine
Education required: Doctor of Medicine
Fields of employment: Hospitals, Clinics

Cardiology (from Greek καρδία, kardia, "heart", and -λογία, -logia) is a medical specialty dealing with disorders of the heart. The field includes diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease and electrophysiology. Physicians specializing in this field of medicine are called cardiologists. Cardiologists should not be confused with cardiac surgeons (cardiothoracic and cardiovascular surgeons), who perform cardiac surgery via sternotomy - open operative procedures on the heart and great vessels.

The term cardiology is derived from the Greek word καρδία (transliterated as kardia and meaning "heart").
The Cardiac Muscle 

Cardiac pacemaker (Electrical system of the heart) 

• Electrical conduction system of the heart 
• Action potential 

• Ventricular action potential 

• Sinoatrial node 

• Atrioventricular node 

• Bundle of His 

• Purkinje fibers 

• Heart Attack (Myocardial Infarction) 



Basic cardiac physiology 

• Systole 

• Diastole 

• Heart sounds 

• Preload 

• Afterload 

• Kussmaul's sign

Disorders of the coronary circulation 

• Atherosclerosis 

• Restenosis 

• Coronary heart disease (Ischaemic heart disease, Coronary artery disease)

• Acute coronary syndrome 

• Angina 

• Myocardial infarction (Heart attack) 

Cardiac arrest 

• Ventricular fibrillation 

• Pulseless ventricular tachycardia 

• Pulseless electrical activity 

• Asystole 

• Sudden cardiac death (The abrupt reduction or cessation of blood flow to the myocardium, leading to death) 

Treatment of cardiac arrest 

• Cardiopulmonary resuscitation (CPR) 

A diagram of a heart with an ECG indicator; 
diagrams like this are used in Cardiology. 

Disorders of the myocardium (muscle of the heart) 

• Cardiomyopathy 

• Ischemic cardiomyopathy 

• Nonischemic cardiomyopathy 

• Amyloid cardiomyopathy 

• Hypertrophic cardiomyopathy (HCM) 

• Hypertrophic obstructive cardiomyopathy (HOCM) (Idiopathic hypertrophic subaortic stenosis (IHSS)) 


• Dilated cardiomyopathy 

• Alcoholic cardiomyopathy 

• Tachycardia induced cardiomyopathy 

• Takotsubo cardiomyopathy (Transient apical ballooning, stress-induced cardiomyopathy) 

• Arrhythmogenic right ventricular dysplasia (Arrhythmogenic right ventricular cardiomyopathy) 

• Restrictive cardiomyopathy 

• Congestive heart failure 

• Cor pulmonale 


• Ventricular hypertrophy 

• Left ventricular hypertrophy 

• Right ventricular hypertrophy 

• Primary tumors of the heart 

• Myxoma 

• Myocardial rupture 

Disorders of the pericardium (outer lining of the heart) 

• Pericarditis 

• Pericardial tamponade 

• Constrictive pericarditis 

Disorders of the heart valves 

• Aortic valve disorders and treatments: 

• Aortic insufficiency 

• Aortic stenosis 

• Aortic valve replacement 

• Aortic valve repair 

• Aortic valvuloplasty 

• Mitral valve disorders and treatments: 

• Mitral valve prolapse 

• Mitral regurgitation 

• Mitral stenosis 

• Mitral valve replacement 

• Mitral valve repair 

• Mitral valvuloplasty 

• Pulmonary valve disorders: 

• Congenital pulmonic stenosis 

• Tricuspid valve disorders 

• Wolff-Parkinson-White syndrome (WPW syndrome) 

Congenital heart disease 

Atrial septal defect 

Ventricular septal defect 

Patent ductus arteriosus 

Bicuspid aortic valve 

Tetralogy of Fallot 

Transposition of the great vessels (TGV) 

Hypoplastic left heart syndrome 

Truncus Arteriosus 


Diseases of blood vessels (Vascular diseases) 

• Vasculitis 

• Atherosclerosis 

• Aneurysm 

• Varicose veins 

• Economy class syndrome 

• Diseases of the aorta 

• Coarctation of the aorta 

• Aortic dissection 

• Aortic aneurysm 

• Diseases of the carotid arteries 

• Carotid artery disease 

• Carotid artery dissection 

Procedures done for coronary artery disease 

• Percutaneous coronary intervention 

• Atherectomy 

• Angioplasty (PTCA) 

• Stenting 

• Coronary artery bypass surgery (CABG) 

• Enhanced external counterpulsation (EECP) 

Devices used in cardiology 

• Stethoscope 

• Devices used to maintain normal electrical rhythm 

• Pacemaker 

• Defibrillator 

• Automated external defibrillator 

• Implantable cardioverter-defibrillator 

• Devices used to maintain blood pressure 

• Artificial heart 

• Heart-lung machine 

• Intra-aortic balloon pump 

• Ventricular assist device 


Diagnostic tests and procedures 

• Blood tests 

• Echocardiogram 

• Cardiovascular Magnetic Resonance 

• Cardiac stress test 

• Auscultation (Listening with the Stethoscope) 

Electrocardiogram (ECG or EKG) 

• QT interval 

• Osborn wave 

• Ambulatory Holter monitor 

• Electrophysiology study 

• Programmed electrical stimulation 

• Sphygmomanometer (Blood pressure cuff) 

• Cardiac enzymes 

• Coronary catheterization 

• Myocardial Fractional Flow Reserve (FFRmyo) 

• IVUS (Intravascular UltraSound) 

• OCT (Optical Coherence Tomography) 

See also 

• Interventional cardiology 

• Clinical cardiac electrophysiology 

• American Heart Association 

• National Heart Foundation of Australia 

• List of cardiac pharmaceutical agents 

External links 


• American College of Cardiology 

• U.S. National Institutes of Health (NIH): Heart and Circulation [1] 



Electrical conduction system of the heart 


[Figure: Isolated conduction system of the heart (Latin: systema conducens cordis). Labels: 1. Sinoatrial node; 
2. Atrioventricular node; 3. Bundle of His; 4. Left bundle branch; 5. Left posterior fascicle; 6. Left anterior 
fascicle; 7. Left ventricle; 8. Ventricular septum; 9. Right ventricle; 10. Right bundle branch.]

The normal electrical conduction of the heart allows electrical 
propagation to be transmitted from the Sinoatrial Node through both 
atria and forward to the Atrioventricular Node. Normal/baseline 
physiology allows further propagation from the AV node to the 
Ventricle or Purkinje Fibers and respective bundle branches and 
subdivisions/fascicles. Both the SA and AV nodes stimulate the 
Myocardium. Time ordered stimulation of the myocardium allows 
efficient contraction of all four chambers of the heart, thereby allowing 
selective blood perfusion through both the lungs and the systemic circulation. 

Electrochemical mechanism 

The neurons that innervate cardiac muscle hold some similarities to 
those of skeletal muscle as well as other important differences. These 
neurons are uniquely subject to influence by the sympathetic part of the 
autonomic nervous system. 

[Figure: Principle of ECG formation. The red lines represent the depolarization wave, not blood flow.]

Electrical conduction system of the heart 219 

Like a neuron, a given myocardial cell has a negative membrane potential when at rest. Stimulation above a 
threshold value induces the opening of voltage-gated ion channels and a flood of cations into the cell. The positively 
charged ions entering the cell cause the depolarization characteristic of an action potential. After depolarization, 
there is a brief repolarization with the efflux of potassium through fast-acting potassium channels. As in 
skeletal muscle, depolarization causes the opening of voltage-gated calcium channels (meanwhile the potassium 
channels have closed), followed by a titrated release of Ca2+ from the t-tubules. This influx of calcium causes 
calcium-induced calcium release from the sarcoplasmic reticulum, and the free Ca2+ causes muscle contraction. After a 
delay, slow-acting potassium channels reopen, and the resulting flow of K+ out of the cell causes repolarization to the 
resting state. 

Note that there are important physiological differences between nodal cells and ventricular cells; the specific 
differences in ion channels and mechanisms of polarization give rise to unique properties of SA node cells, most 
importantly the spontaneous depolarizations necessary for the SA node's pacemaker activity. 

Conduction pathway 

Signals arising in the SA node (and propagating to the left atrium via Bachmann's bundle) stimulate the atria to 
contract. In parallel, action potentials travel to the AV node via internodal pathways. After a delay, the stimulus is 
conducted through the bundle of His to the bundle branches and then to the purkinje fibers and the endocardium at 
the apex of the heart, then finally to the ventricular myocardium. 

Microscopically, the wave of depolarization propagates to adjacent cells via gap junctions located on the intercalated 
disk. The heart is a functional syncytium (not to be confused with a true "syncytium," in which all the cells are fused 
together, sharing the same plasma membrane as in skeletal muscle). In a functional syncytium, electrical impulses 
propagate freely between cells in every direction, so that the myocardium functions as a single contractile unit. This 
property allows rapid, synchronous depolarization of the myocardium. While normally advantageous, this property 
can be detrimental as it potentially allows the propagation of incorrect electrical signals. These gap junctions can 
close to isolate damaged or dying tissue, as in a myocardial infarction. 

Depolarization and the ECG 

SA node: P wave 

Under normal conditions, electrical activity is 
spontaneously generated by the SA node, the 
physiological pacemaker. This electrical impulse 
is propagated throughout the right atrium, and 
through Bachmann's bundle to the left atrium, 
stimulating the myocardium of both atria to 
contract. The conduction of the electrical impulse 
throughout the left and right atria is seen on the 
ECG as the P wave. 




[Figure: The ECG complex. P = P wave, PR = PR interval, QRS = QRS complex, QT = QT interval, ST = ST segment, T = T wave.]

As the electrical activity spreads throughout the atria, it travels via specialized pathways, known as internodal 
tracts, from the SA node to the AV node. 



AV node/Bundles: PR interval 

The AV node functions as a critical delay in the conduction system. 
Without this delay, the atria and ventricles would contract at the same 
time, and blood wouldn't flow effectively from the atria to the 
ventricles. The delay in the AV node forms much of the PR segment 
on the ECG. Part of atrial repolarization can be represented by the PR segment as well. 

The distal portion of the AV node is known as the bundle of His. The 

bundle of His splits into two branches in the interventricular septum, 

the left bundle branch and the right bundle branch. The left bundle 

branch activates the left ventricle, while the right bundle branch 

activates the right ventricle. The left bundle branch is short, splitting 

into the left anterior fascicle and the left posterior fascicle. The left 

posterior fascicle is relatively short and broad, with dual blood supply, 

making it particularly resistant to ischemic damage. The left posterior 

fascicle transmits impulses to the papillary muscles, leading to mitral 

valve closure. As the left posterior fascicle is shorter and broader than the right, impulses reach the papillary muscles 

just prior to depolarization, and therefore contraction, of the left ventricle myocardium. This allows pre-tensioning of 

the chordae tendineae, increasing the resistance to flow through the mitral valve during left ventricular contraction. 

Purkinje fibers/ventricular myocardium: QRS complex 

The two bundle branches taper out to produce numerous purkinje fibers, which stimulate individual groups of 
myocardial cells to contract. 

The spread of electrical activity (depolarization) through the ventricular myocardium produces the QRS complex on 
the ECG. 

Ventricular repolarization: T wave 

The last event of the cycle is the repolarization of the ventricles. The transthoracically measured PQRS portion of an 
electrocardiogram is chiefly influenced by the sympathetic nervous system. The T (and occasionally U) waves are 
chiefly influenced by the parasympathetic nervous system guided by integrated brainstem control from the vagus 
nerve and the thoracic spinal accessory ganglia. 
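The correspondence between conduction events and ECG features described in this section can be collected in a small lookup table (a Python sketch; the dictionary name is mine, and the entries simply restate the text):

```python
# Summary of the "Depolarization and the ECG" section: each ECG feature
# mapped to the conduction event that produces it.
ECG_FEATURES = {
    "P wave": "depolarization of both atria, spreading from the SA node",
    "PR interval": "conduction delay in the AV node",
    "QRS complex": "depolarization of the ventricular myocardium "
                   "via the bundle branches and Purkinje fibers",
    "T wave": "repolarization of the ventricles",
}

for feature, event in ECG_FEATURES.items():
    print(f"{feature}: {event}")
```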


An impulse (action potential) originating from the SA node at a rate of 60 to 100 bpm is known as normal 
sinus rhythm. If SA nodal impulses occur at a rate below 60 bpm, the heart rhythm is known as sinus bradycardia. 
If SA nodal impulses occur at a rate exceeding 100 bpm, the consequent rapid heart rate is sinus tachycardia. These 
conditions are not necessarily bad symptoms, however. Trained athletes, for example, usually show resting heart rates 
slower than 60 bpm. If the SA node fails to initiate the impulse, the AV junction can take over as the main 
pacemaker of the heart. The AV junction "surrounds" the AV node and has a regular rate of 40 to 60 bpm. These 
"junctional" rhythms are characterized by a missing or inverted P wave. If both the SA node and the AV junction 
fail to initiate the electrical impulse, the ventricles can fire the electrical impulses themselves at a rate of 20 to 
40 bpm and will have a QRS complex of greater than 120 ms. 
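The rate ranges above lend themselves to a simple classifier (an illustrative Python sketch, not a clinical tool; the function name and origin labels are my assumptions):

```python
def classify_rhythm(rate_bpm, origin="SA"):
    """Rough rhythm label from pacemaker origin and rate (illustrative only).

    Thresholds follow the text: sinus rhythm 60-100 bpm, junctional
    (AV) escape 40-60 bpm, ventricular escape 20-40 bpm with QRS > 120 ms.
    """
    if origin == "SA":
        if rate_bpm < 60:
            return "sinus bradycardia"
        if rate_bpm > 100:
            return "sinus tachycardia"
        return "normal sinus rhythm"
    if origin == "AV":
        # Junctional rhythm: missing or inverted P wave, typically 40-60 bpm
        return "junctional rhythm"
    # Ventricular escape: 20-40 bpm, wide QRS (> 120 ms)
    return "ventricular escape rhythm"

print(classify_rhythm(72))        # normal sinus rhythm
print(classify_rhythm(45))        # sinus bradycardia
print(classify_rhythm(50, "AV"))  # junctional rhythm
```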


Embryologic evidence 

Embryologic evidence of the generation of the cardiac conduction system illuminates the respective roles of this 
specialized set of cells. Innervation of the heart begins with a parasympathetic, cholinergic, first-order system 
centered in the brain. It is then followed by rapid growth of a second-order sympathetic adrenergic system arising 
from the formation of the thoracic spinal ganglia. The third order of electrical influence on the heart is derived 
from the vagus nerve as the other peripheral organs form. 

See also 

• Bradycardia 

• Cardiac pacemaker 

• Electrocardiogram (ECG) 

• Tachycardia 

• Circle map — simplified mathematical model of the beating heart. 


[1] "Innervation of the heart" ( Human Embryology: Organogenesis: 
Functional development of the heart. . 

External links 

• Conduction system of the heart ( 
jspzQzpgzEzzSzppdocszSzuszSzcnszSzcontentzSzadamzSzimagepageszSzl8052zPzhtm) - Merck Source 


Coronary disease 

Coronary disease 

Classification and external resources: MeSH D003327 [1] 

Coronary disease (or coronary heart disease) refers to the failure of coronary circulation to supply adequate 
circulation to cardiac muscle and surrounding tissue. It is the most common form of disease affecting the 
heart and an important cause of premature death in Europe, the Baltic states, Russia, North and South America, 
Australia and New Zealand. It has been predicted that all regions of the world will be affected by 2020. 

It is most commonly equated with atherosclerotic coronary artery disease, but coronary disease can be due to other 
causes, such as coronary vasospasm. It is possible for the stenosis to be caused by the spasm. 

Signs and symptoms 

Coronary heart disease may be asymptomatic. If not, symptoms can include: 

• Chest heaviness 

• Dyspnea 

• Fatigue 

• Chest pain 

• Angina 

Myocardial infarction is a complication of coronary disease. It is sometimes classified as a symptom. 


Risk factors 

Coronary artery disease, the most common type of coronary disease, which has no clear etiology, has many risk 
factors, including smoking, radiotherapy to the chest, hypertension, diabetes, and hyperlipidemia. 

Also, having a Type A behavior pattern, a group of personality characteristics including time urgency and 
competitiveness, is linked to an increased risk of coronary disease. 


Lifestyle changes 

Lifestyle changes that may be useful in coronary disease include: 

• Weight control 

• Smoking cessation 

• Exercise 

• Healthy diet 

Over the past 50 years, doctors have recommended reducing animal-based foods and increasing plant-based foods. 
However, many doctors have argued that it is an excess of carbohydrates, not animal fat, that causes coronary 
heart disease, and have recommended eating more animal fat. 


Medications to treat coronary disease 

• Cholesterol lowering medications, such as statins, are useful to decrease the amount of "bad" (LDL) cholesterol. 

• Nitroglycerin 

• ACE inhibitors, which treat hypertension and may lower the risk of recurrent myocardial infarction 

• Calcium channel blockers and/or beta-blockers 


• Aspirin 

Surgical intervention 

• Angioplasty 

• Stents (bare-metal or drug-eluting) 


• Coronary artery bypass 

• Heart Transplant 



References 

[2] Boon NA, Colledge NR, Walker BR, Hunter JAA (2006). Davidson's Principles & Practice of Medicine, 20th Edition. Churchill Livingstone. 
[3] Williams MJ, Restieaux NJ, Low CJ (February 1998). "Myocardial infarction in young people with normal coronary arteries". Heart 79 (2): 191-4. PMID 9538315. PMC 1728590. 
[4] Rezkalla SH, Kloner RA (October 2007). "Cocaine-induced acute myocardial infarction". Clin Med Res 5 (3): 172-6. doi:10.3121/cmr.2007.759. PMID 18056026. PMC 2111405. 
[5] https://health.google.com/health/ref/Coronary+heart+disease 
[8] McCann (2001). "The precocity-longevity hypothesis: earlier peaks in career achievement predict shorter lives". Personality and Social Psychology Bulletin 27: 1429-1439; Rhodewalt & Smith (1991). "Current issues in Type A behaviour, coronary proneness, and coronary heart disease". In C.R. Snyder & D.R. Forsyth (Eds.), Handbook of Social and Clinical Psychology (pp. 197-220). New York: Pergamon. 
[10] Morrison LM (1960). "Diet in coronary atherosclerosis". JAMA 173: 884-888. 
[11] Mente A, de Koning L, Shannon HS, Anand SS (April 2009). "A systematic review of the evidence supporting a causal link between dietary factors and coronary heart disease". Population Health Research Institute, Hamilton, Canada. 
[12] Siri-Tarino PW, Sun Q, Hu FB, Krauss RM (2010). "Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease". Am J Clin Nutr 91 (3): 535-46. Epub 2010 Jan 13. Children's Hospital Oakland Research Institute, Oakland, CA, USA. 
[13] "Fats and Fatty Acids in Human Nutrition". Joint FAO/WHO Expert Consultation, Geneva, November 2008. 




Atherosclerosis 

Classification and external resources: ICD-9 440 [2], 414.0 [3] 

[Figure: The progression of atherosclerosis (size exaggerated; see text). Stages: initial lesion (macrophage 
infiltration), fatty streak, intermediate lesion (intracellular lipid accumulation), fibroatheroma, complicated lesion.]

Atherosclerosis (also known as arteriosclerotic vascular disease or ASVD) is a condition in which an artery wall 
thickens as the result of a build-up of fatty materials such as cholesterol. It is a syndrome affecting arterial blood 
vessels, a chronic inflammatory response in the walls of arteries, in large part due to the accumulation of 
macrophage white blood cells and promoted by low-density lipoproteins (plasma proteins that carry cholesterol and 
triglycerides) without adequate removal of fats and cholesterol from the macrophages by functional high density 
lipoproteins (HDL), (see apoA-1 Milano). It is commonly referred to as a hardening or furring of the arteries. It is 


caused by the formation of multiple plaques within the arteries. 
The atheromatous plaque is divided into three distinct components: 

1. The atheroma ("lump of gruel", from Greek ἀθήρα, athēra, gruel), which is the nodular accumulation of a soft, 
flaky, yellowish material at the center of large plaques, composed of macrophages nearest the lumen of the artery 

2. Underlying areas of cholesterol crystals 

3. Calcification at the outer base of older/more advanced lesions. 

The following terms are similar, yet distinct, in both spelling and meaning, and can be easily confused: 
arteriosclerosis, arteriolosclerosis, and atherosclerosis. Arteriosclerosis is a general term describing any hardening 
(and loss of elasticity) of medium or large arteries (from the Greek arteria, meaning artery, and sclerosis, meaning 
hardening); arteriolosclerosis is any hardening (and loss of elasticity) of arterioles (small arteries); atherosclerosis is 


a hardening of an artery specifically due to an atheromatous plaque. The term atherogenic is used for substances or 
processes that cause atherosclerosis. 

Atherosclerosis, though typically asymptomatic for decades, eventually produces two main problems: First, the 
atheromatous plaques, though long compensated for by artery enlargement (see IMT), eventually lead to plaque 
ruptures and clots inside the artery lumen over the ruptures. The clots heal and usually shrink but leave behind 
stenosis (narrowing) of the artery (both locally and in smaller downstream branches), or worse, complete closure, 
and, therefore, an insufficient blood supply to the tissues and organ it feeds. Second, if the compensating artery 
enlargement process is excessive, then a net aneurysm results. 

These complications of advanced atherosclerosis are chronic, slowly progressive and cumulative. Most commonly, 
soft plaque suddenly ruptures (see vulnerable plaque), causing the formation of a thrombus that will rapidly slow or 
stop blood flow, leading to death of the tissues fed by the artery in approximately 5 minutes. This catastrophic event 
is called an infarction. One of the most common recognized scenarios is called coronary thrombosis of a coronary 
artery, causing myocardial infarction (a heart attack). Even worse is the same process in an artery to the brain, 
commonly called stroke. Another common scenario in very advanced disease is claudication from insufficient blood 
supply to the legs, typically due to a combination of both stenosis and aneurysmal segments narrowed with clots. 
Since atherosclerosis is a body-wide process, similar events occur also in the arteries to the brain, intestines, kidneys, 
legs, etc. Many infarctions involve only very small amounts of tissue and are termed clinically silent, because the 
person having the infarction does not notice the problem, does not seek medical help or when they do, physicians do 
not recognize what has happened. 

Signs and symptoms 

Atherosclerosis typically begins in early adolescence, and is usually found in most major arteries, yet is 
asymptomatic and not detected by most diagnostic methods during life. Atheroma in arm, or more often in leg 
arteries, which produces decreased blood flow is called peripheral artery occlusive disease (PAOD). 

According to United States data for the year 2004, for about 65% of men and 47% of women, the first symptom of 
atherosclerotic cardiovascular disease is heart attack or sudden cardiac death (death within one hour of symptom onset). 

Most artery flow disrupting events occur at locations with less than 50% lumen narrowing (-20% stenosis is 
average). The illustration above, like most illustrations of arterial disease, overemphasizes lumen narrowing, as 
opposed to compensatory external diameter enlargement (at least within smaller arteries, e.g., heart arteries) typical 
of the atherosclerosis process as it progresses (see Glagov, the ASTEROID trial, and the IVUS photographs on 
page 8, as examples for a more accurate understanding). The relative geometry error within the illustration is 
common to many older illustrations, an error slowly being more commonly recognized within the last decade. 

Cardiac stress testing, traditionally the most commonly performed non-invasive testing method for blood flow 
limitations, in general, detects only lumen narrowing of -75% or greater, although some physicians claim that 
nuclear stress methods can detect as little as 50%. 

Causes 

Atherosclerosis develops from low-density lipoprotein (LDL) molecules becoming oxidized (LDL-ox) by free 
radicals, particularly reactive oxygen species (ROS). When oxidized LDL comes in contact 
with an artery wall, a series of reactions occur to repair the damage to the artery wall caused by oxidized LDL. The 
LDL molecule is globular shaped with a hollow core to carry cholesterol throughout the body. Cholesterol can move 
in the bloodstream only by being transported by lipoproteins. 

The body's immune system responds to the damage to the artery wall caused by oxidized LDL by sending 
specialized white blood cells (macrophages and T-lymphocytes) to absorb the oxidized-LDL forming specialized 
foam cells. These white blood cells are not able to process the oxidized-LDL, and ultimately grow then rupture, 
depositing a greater amount of oxidized cholesterol into the artery wall. This triggers more white blood cells, 


continuing the cycle. 

Eventually, the artery becomes inflamed. The cholesterol plaque causes the muscle cells to enlarge and form a hard 
cover over the affected area. This hard cover is what causes a narrowing of the artery, reduces the blood flow and 
increases blood pressure. 

Some researchers believe that atherosclerosis may be caused by an infection of the vascular smooth muscle cells; 

chickens, for example, develop atherosclerosis when infected with the Marek's disease herpesvirus. Herpesvirus 

infection of arterial smooth muscle cells has been shown to cause cholesteryl ester (CE) accumulation. 

Cholesteryl ester accumulation is associated with atherosclerosis. 


Also, cytomegalovirus (CMV) infection is associated with cardiovascular diseases. 

Risk factors 


Various anatomic, physiological and behavioral risk factors for atherosclerosis are known. These can be divided 
into various categories: congenital vs acquired, modifiable or not, classical or non-classical. The points labelled '+' in 
the following list form the core components of metabolic syndrome. 

Risks multiply, with two factors increasing the risk of atherosclerosis fourfold. Hyperlipidemia, hypertension and 
cigarette smoking together increase the risk seven times. 


• Diabetes or Impaired glucose tolerance (IGT) + 

• Dyslipoproteinemia (unhealthy patterns of serum proteins carrying fats & cholesterol): + 

• High serum concentration of low-density lipoprotein (LDL, "bad if elevated concentrations and small"), and / 
or very low density lipoprotein (VLDL) particles, i.e., "lipoprotein subclass analysis" 

• Low serum concentration of functioning high density lipoprotein (HDL "protective if large and high enough" 
particles), i.e., "lipoprotein subclass analysis" 

• An LDL:HDL ratio greater than 3:1 

• Tobacco smoking, increases risk by 200% after several pack years 

• Having hypertension +, on its own increasing risk by 60% 

• Elevated serum C-reactive protein concentrations 


• Advanced age 

• Male sex 

• Having close relatives who have had some complication of atherosclerosis (e.g. coronary heart disease or stroke) 

• Genetic abnormalities, e.g. familial hypercholesterolemia 
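The LDL:HDL ratio criterion listed above (a ratio greater than 3:1) can be expressed as a minimal helper (an illustrative Python sketch; the function name and the mg/dL units are my assumptions, and the threshold is the one cited in the text):

```python
def ldl_hdl_flag(ldl_mg_dl, hdl_mg_dl):
    """Return the LDL:HDL ratio and whether it exceeds the 3:1 threshold
    cited in the text as a risk factor (illustrative, not diagnostic)."""
    ratio = ldl_mg_dl / hdl_mg_dl
    return ratio, ratio > 3.0

ratio, elevated = ldl_hdl_flag(160, 40)  # ratio 4.0, above the 3:1 threshold
print(round(ratio, 1), elevated)
```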

Lesser or uncertain 

The following factors are of relatively lesser importance, are uncertain or unquantified: 

Obesity (in particular central obesity, also referred to as abdominal or male-type obesity) + 

A sedentary lifestyle 

Postmenopausal estrogen deficiency 

High carbohydrate intake 

Intake of trans fat 

Elevated serum levels of triglycerides + 

Elevated serum levels of homocysteine 

Elevated serum levels of uric acid (also responsible for gout) 

Elevated serum fibrinogen concentrations 

Elevated serum lipoprotein(a) concentrations 


• Chronic systemic inflammation, as reflected by upper-normal WBC concentrations, elevated hs-CRP and many 
other blood chemistry markers, most of which are at present used only at the research level and are not done clinically. 

• Stress or symptoms of clinical depression 

• Hyperthyroidism (an over-active thyroid) 


• Elevated serum insulin levels + 


• Short sleep duration 

• Chlamydia pneumoniae infection 

Dietary 

The relation between dietary fat and atherosclerosis is a contentious field. The USDA, in its food pyramid, promotes 
a low-fat diet, based largely on its view that fat in the diet is atherogenic. The American Heart Association, the 
American Diabetes Association and the National Cholesterol Education Program make similar recommendations. In 
contrast, Prof Walter Willett (Harvard School of Public Health, PI of the second Nurses' Health Study) recommends 
much higher levels, especially of monounsaturated and polyunsaturated fat. Writing in Science, Gary Taubes 
detailed that political considerations played into the recommendations of government bodies. These differing 
views reach a consensus, though, against consumption of trans fats. 

The role of dietary oxidized fats / lipid peroxidation (rancid fats) in humans is not clear. Laboratory animals fed 

rancid fats develop atherosclerosis. Rats fed DHA-containing oils experienced marked disruptions to their 

antioxidant systems, as well as accumulated significant amounts of peroxide in their blood, livers and kidneys. In 

another study, rabbits fed atherogenic diets containing various oils were found to undergo the greatest amount of 

oxidative susceptibility of LDL via polyunsaturated oils. In a study involving rabbits fed heated soybean oil, 

"grossly induced atherosclerosis and marked liver damage were histologically and clinically demonstrated.' 

Rancid fats and oils taste very bad even in small amounts; people avoid eating them. It is very difficult to measure 

or estimate the actual human consumption of these substances. In addition, the majority of oils consumed in the 

United States are refined, bleached, deodorized and degummed by manufacturers. The resultant oils are colorless, 

odorless, tasteless and have a longer shelf life than their unrefined counterparts. This extensive processing serves 

to make peroxidated, rancid oils much more elusive to detection via the various human senses than the unprocessed 



Atherogenesis is the developmental process of atheromatous plaques. It is characterized by a remodeling of arteries 
involving the concomitant accumulation of fatty substances called plaques. One recent theory suggests that, for 
unknown reasons, leukocytes, such as monocytes or basophils, begin to attack the endothelium of the artery lumen in 
cardiac muscle. The ensuing inflammation leads to formation of atheromatous plaques in the arterial tunica intima, a 
region of the vessel wall located between the endothelium and the tunica media. The bulk of these lesions is made of 
excess fat, collagen, and elastin. At first, as the plaques grow, only wall thickening occurs without any narrowing, 
stenosis of the artery opening, called the lumen; stenosis is a late event, which may never occur and is often the 
result of repeated plaque rupture and healing responses, not just the atherosclerosis process by itself. 




The first step of atherogenesis is the development of so called "fatty 
streaks," which are small sub-endothelial deposits of monocyte-derived 
macrophages. The primary documented driver of this process is 
oxidized Lipoprotein particles within the wall, beneath the endothelial 
cells, though upper normal or elevated concentrations of blood glucose 
also play a major role, and not all factors are fully understood. Fatty 
streaks may appear and disappear. 

Low-density lipoprotein particles in blood plasma, when they invade 
the endothelium and become oxidized, create a risk for cardiovascular 
disease. A complex set of biochemical reactions regulates the oxidation 
of LDL, chiefly stimulated by presence of enzymes, e.g. Lp-LpA2 and 
free radicals in the endothelium or blood vessel lining. 

[Figure: Micrograph of an artery that supplies the heart, with significant atherosclerosis and marked luminal 
narrowing. Tissue stained with Masson's trichrome.]

The initial damage to the blood vessel wall results in a "call for help," 
an inflammatory response. Monocytes (a type of white blood cell) 

enter the artery wall from the bloodstream, with platelets adhering to the area of insult. This may be promoted by 
redox signaling induction of factors such as VCAM-1, which recruit circulating monocytes. The monocytes 
differentiate into macrophages, which ingest oxidized LDL, slowly turning into large "foam cells", so described 
because of their changed appearance resulting from the numerous internal cytoplasmic vesicles and resulting high 
lipid content. Under the microscope, the lesion now appears as a fatty streak. Foam cells eventually die, and further 
propagate the inflammatory process. There is also smooth muscle proliferation and migration from tunica media to 
intima responding to cytokines secreted by damaged endothelial cells. This would cause the formation of a fibrous 
capsule covering the fatty streak. 

Calcification and lipids 

Intracellular microcalcifications form within vascular smooth muscle cells of the surrounding muscular layer, 
specifically in the muscle cells adjacent to the atheromas. In time, as cells die, this leads to extracellular calcium 
deposits between the muscular wall and outer portion of the atheromatous plaques. A similar form of an intramural 
calcification, presenting the picture of an early phase of arteriosclerosis, appears to be induced by a number of drugs 
that have an antiproliferative mechanism of action (Rainer Liedtke 2008). 

Cholesterol is delivered into the vessel wall by cholesterol-containing low-density lipoprotein (LDL) particles. To 
attract and stimulate macrophages, the cholesterol must be released from the LDL particles and oxidized, a key step 
in the ongoing inflammatory process. The process is worsened if there is insufficient high-density lipoprotein (HDL), 
the lipoprotein particle that removes cholesterol from tissues and carries it back to the liver. 

The foam cells and platelets encourage the migration and proliferation of smooth muscle cells, which in turn ingest 
lipids, become replaced by collagen and transform into foam cells themselves. A protective fibrous cap normally 
forms between the fatty deposits and the artery lining (the intima). 

These capped fatty deposits (now called 'atheromas') produce enzymes that cause the artery to enlarge over time. As long as the artery enlarges sufficiently to compensate for the extra thickness of the atheroma, no narrowing ("stenosis") of the opening ("lumen") occurs. The artery becomes expanded with an egg-shaped cross-section, still with a circular opening. If the enlargement is beyond proportion to the atheroma thickness, an aneurysm is created.

Visible features 

Although arteries are not typically studied microscopically, two plaque types can be distinguished:

1. The fibro-lipid (fibro-fatty) plaque is characterized by
an accumulation of lipid-laden cells underneath the 
intima of the arteries, typically without narrowing the 
lumen due to compensatory expansion of the bounding 
muscular layer of the artery wall. Beneath the 
endothelium there is a "fibrous cap" covering the 
atheromatous "core" of the plaque. The core consists of 
lipid-laden cells (macrophages and smooth muscle cells) 
with elevated tissue cholesterol and cholesterol ester 
content, fibrin, proteoglycans, collagen, elastin, and 
cellular debris. In advanced plaques, the central core of 
the plaque usually contains extracellular cholesterol 
deposits (released from dead cells), which form areas of 
cholesterol crystals with empty, needle-like clefts. At the 
periphery of the plaque are younger "foamy" cells and 
capillaries. These plaques usually produce the most 
damage to the individual when they rupture. 

2. The fibrous plaque is also localized under the intima, within the wall of the artery, resulting in thickening and expansion of the wall and, sometimes, spotty localized narrowing of the lumen with some atrophy of the muscular layer. The fibrous plaque contains collagen fibers (eosinophilic), precipitates of calcium (hematoxylinophilic) and, rarely, lipid-laden cells.

In effect, the muscular portion of the artery wall forms small aneurysms just large enough to hold the atheromas that are present. The muscular portion of the artery wall usually remains strong, even after it has remodeled to compensate for the atheromatous plaques.

However, atheromas within the vessel wall are soft and fragile with little elasticity. Arteries constantly expand and 
contract with each heartbeat, i.e., the pulse. In addition, the calcification deposits between the outer portion of the 
atheroma and the muscular wall, as they progress, lead to a loss of elasticity and stiffening of the artery as a whole. 

The calcification deposits, after they have become sufficiently advanced, are partially visible on coronary artery computed tomography or electron beam tomography (EBT) as rings of increased radiographic density, forming halos around the outer edges of the atheromatous plaques, within the artery wall. On CT, a density greater than 130 Hounsfield units (some argue for 90 units) is usually accepted as clearly representing tissue calcification within arteries. These deposits demonstrate unequivocal evidence of the disease, relatively advanced, even though the lumen of the artery often still appears normal by angiography or intravascular ultrasound.
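The 130 HU threshold described above is the basis of the standard (Agatston) coronary calcium score, in which each calcified focus contributes its area weighted by its peak attenuation. The following is a minimal sketch of that weighting scheme under stated assumptions, not a clinical implementation; the `lesions` input format and the conventional 1 mm² minimum-area cutoff are simplifications:

```python
def density_weight(peak_hu: float) -> int:
    """Agatston density weight from a lesion's peak attenuation (Hounsfield units)."""
    if peak_hu < 130:   # below the usual threshold for arterial calcification
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4


def agatston_score(lesions) -> float:
    """Sum of area (mm^2) x density weight over all calcified foci on all slices.

    `lesions` is an iterable of (area_mm2, peak_hu) pairs; foci smaller than
    1 mm^2 are conventionally ignored as noise.
    """
    return sum(area * density_weight(hu)
               for area, hu in lesions
               if area >= 1.0)


# Example: two foci, 4 mm^2 peaking at 450 HU and 2 mm^2 peaking at 180 HU
# -> 4*4 + 2*1 = 18
score = agatston_score([(4.0, 450), (2.0, 180)])
```

In practice the per-lesion areas and peak HU values come from segmenting the CT slices; the sketch only shows how the threshold and weighting combine into a single score.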

In days gone by, the lateral chest x-ray (demonstrating greater opacity in the aortic arch and descending aorta than in the thoracic spine) gave an indication of the degree of calcified plaque burden a patient had. This has been known as Piper's sign and can often be seen in elderly persons, particularly those with concomitant osteoporosis.

Severe atherosclerosis of the aorta. Autopsy specimen. 



Rupture and stenosis 

Although the disease process tends to be slowly progressive over decades, it usually remains asymptomatic until an 
atheroma ulcerates, which leads to immediate blood clotting at the site of atheroma ulcer. This triggers a cascade of 
events that leads to clot enlargement, which may quickly obstruct the flow of blood. A complete blockage leads to ischemia of the myocardium (heart muscle) and damage. This process is myocardial infarction, or "heart attack."

If the heart attack is not fatal, fibrous organization of the clot within the lumen ensues, covering the rupture but also 
producing stenosis or closure of the lumen, or over time and after repeated ruptures, resulting in a persistent, usually 
localized stenosis or blockage of the artery lumen. Stenoses can be slowly progressive, whereas plaque ulceration is 
a sudden event that occurs specifically in atheromas with thinner/weaker fibrous caps that have become "unstable." 

Repeated plaque ruptures, ones not resulting in total lumen closure, combined with the clot patch over the rupture 
and healing response to stabilize the clot, is the process that produces most stenoses over time. The stenotic areas 
tend to become more stable, despite increased flow velocities at these narrowings. Most major blood-flow-stopping 
events occur at large plaques, which, prior to their rupture, produced very little if any stenosis. 

From clinical trials, 20% is the average stenosis at plaques that subsequently rupture with resulting complete artery closure. Most severe clinical events do not occur at plaques that produce high-grade stenosis; from clinical trials, only 14% of heart attacks occur from artery closure at plaques producing a 75% or greater stenosis prior to the vessel closing.

If the fibrous cap separating a soft atheroma from the bloodstream within the artery ruptures, tissue fragments are 
exposed and released, and blood enters the atheroma within the wall and sometimes results in a sudden expansion of 
the atheroma size. Tissue fragments are very clot-promoting, containing collagen and tissue factor; they activate platelets and the coagulation system. The result is the formation of a thrombus (blood clot) overlying the
atheroma, which obstructs blood flow acutely. With the obstruction of blood flow, downstream tissues are starved of 
oxygen and nutrients. If this is the myocardium (heart muscle), angina (cardiac chest pain) or myocardial infarction 
(heart attack) develops. 


Areas of severe narrowing (stenosis) detectable by angiography, and to a lesser extent by "stress testing," have long been the focus of human diagnostic techniques for cardiovascular disease in general. However, these methods detect only severe narrowing, not the underlying atherosclerotic disease. As demonstrated by human clinical studies, most severe events occur in locations with heavy plaque yet little or no lumen narrowing before the debilitating event suddenly occurs. Plaque rupture can lead to artery lumen occlusion within seconds to minutes, with potential permanent debility and sometimes sudden death.

Micrograph of arterial wall with calcified (violet colour) atherosclerotic plaque (haematoxylin & eosin stain)

Plaques that have ruptured are called complicated plaques. The lipid matrix breaks through the thinning collagen cap, and when the lipids come in contact with the blood, clotting occurs. After rupture, platelet adhesion brings the clotting cascade into contact with the lipid pool, causing a thrombus to form. This thrombus may grow and travel through arteries and veins until it lodges in an area that narrows. Once the area is blocked, blood and oxygen cannot supply the downstream tissue, causing cell death and necrosis. Serious complicated plaques can thus cause death of organ tissue, with serious complications for that organ system.

Atherosclerosis 231

Greater than 75% lumen stenosis used to be considered by cardiologists the hallmark of clinically significant disease, because it is typically only at this severity of narrowing of the larger heart arteries that recurring episodes of angina and detectable abnormalities by stress testing are seen. However, clinical trials have shown that only about 14% of clinically debilitating events occur at locations with this or greater severity of stenosis. The majority of events occur due to atheroma plaque rupture at areas without narrowing sufficient to produce any angina or stress test abnormalities. Thus, since the later 1990s, greater attention has been focused on the "vulnerable plaque."

Though any artery in the body can be involved, severe narrowing or obstruction is usually only recognized in arteries that supply the more critically important organs. Obstruction of arteries supplying the heart muscle results in a heart attack; obstruction of arteries supplying the brain results in a stroke. These events are life-changing and often result in irreversible loss of function, because lost heart muscle and brain cells do not grow back to any significant extent, typically less than 2%.

Over the last couple of decades, methods other than angiography and stress-testing have been increasingly developed 
as ways to better detect atherosclerotic disease before it becomes symptomatic. These have included both (a) 
anatomic detection methods and (b) physiologic measurement methods. 

Examples of anatomic methods include: (1) coronary calcium scoring by CT, (2) carotid IMT (intima-media thickness) measurement by ultrasound, and (3) IVUS (intravascular ultrasound).

Examples of physiologic methods include: (1) lipoprotein subclass analysis, (2) HbA1c, (3) hs-CRP, and (4)

The example of the metabolic syndrome combines both anatomic (abdominal girth) and physiologic (blood pressure, 
elevated blood glucose) methods. 

Advantages of these two approaches: The anatomic methods directly measure some aspect of the actual atherosclerotic disease process itself, and thus offer potential for earlier detection (including before symptoms start), disease staging, and tracking of disease progression. The physiologic methods are often less expensive and safer, and changing them for the better may slow disease progression, in some cases with marked improvement.

Disadvantages of these two approaches: The anatomic methods are generally more expensive and several are invasive, such as IVUS. The physiologic methods do not quantify the current state of the disease or directly track progression. For both, clinicians and third-party payers have been slow to accept the usefulness of these newer methods.

If atherosclerosis leads to symptoms, some symptoms such as angina pectoris can be treated. Non-pharmaceutical 
means are usually the first method of treatment, such as cessation of smoking and practicing regular exercise. If 
these methods do not work, medicines are usually the next step in treating cardiovascular diseases and, with improvements, have increasingly become the most effective method over the long term. However, medicines are criticized for their expense, patent control, and occasional undesired side effects.


In general, the group of medications referred to as statins has been the most popular and is widely prescribed for treating atherosclerosis. Statins have relatively few short-term or longer-term undesirable side effects, and multiple comparative treatment/placebo trials have fairly consistently shown strong effects in reducing atherosclerotic disease 'events', with roughly 25% comparative mortality reduction in clinical trials, although one study design, ALLHAT, was less strongly favorable.

The newest statin, rosuvastatin, has been the first to demonstrate regression of atherosclerotic plaque within the coronary arteries by IVUS (intravascular ultrasound) evaluation. The study was set up to demonstrate an effect primarily on atherosclerosis volume within a 2-year time frame in people with active/symptomatic disease (angina frequency also declined markedly), but not global clinical outcomes, which were expected to require longer trial periods; these longer trials remain in progress.

However, for most people, changing their physiologic behaviors, from the usual high risk to greatly reduced risk, 
requires a combination of several compounds, taken on a daily basis and indefinitely. More and more human 
treatment trials have been done and are ongoing that demonstrate improved outcome for those people using 
more-complex and effective treatment regimens that change physiologic behaviour patterns to more closely resemble 
those that humans exhibit in childhood at a time before fatty streaks begin forming. 

The statins, and some other medications, have been shown to have antioxidant effects, possibly part of their basis for 
some of their therapeutic success in reducing cardiac 'events'. 

The success of statin drugs in clinical trials is based on some reductions in mortality rates; however, because trial design was biased toward men and the middle-aged, the data are, as yet, less clear for women and people over the age of 70. For example, in the Scandinavian Simvastatin Survival Study (4S), the first large placebo-controlled, randomized clinical trial of a statin in people with advanced disease who had already suffered a heart attack, the overall mortality rate reduction for those taking the statin, vs. placebo, was 30%. For the subgroup of people in the trial who had diabetes mellitus, the mortality rate reduction between statin and placebo was 54%. 4S was a 5.4-year trial that started in 1989 and was published in 1995 after completion. There were 3 more dead women at trial's end on statin than on placebo; whether this was chance or some relation to the statin remains unclear. The ASTEROID trial has been the first to show actual disease volume regression (see page 8 of the paper, which shows cross-sectional areas of the total heart artery wall at the start and after 2 years of rosuvastatin 40 mg/day treatment); however, its design was not able to "prove" the mortality reduction issue, since it did not include a placebo group: the individuals offered treatment within the trial had advanced disease, and a comparison placebo arm was judged to be unethical.

Primary and secondary prevention 

Combinations of statins, niacin, intestinal cholesterol absorption-inhibiting supplements (ezetimibe and others) and, to a much lesser extent, fibrates have been the most successful in changing common but sub-optimal lipoprotein patterns and group outcomes. In the many secondary-prevention and several primary-prevention trials, several classes of lipoprotein-expression-altering (less correctly termed "cholesterol-lowering") agents have consistently reduced not only heart attack, stroke and hospitalization but also all-cause mortality rates. The first of the large secondary-prevention comparative statin/placebo treatment trials was the Scandinavian Simvastatin Survival Study (4S), with over fifteen more studies extending through to the more recent ASTEROID trial published in 2006. The first primary-prevention comparative treatment trial was AFCAPS/TexCAPS, with multiple later comparative statin/placebo treatment trials including EXCEL, [35] ASCOT [36] and SPARCL. [37] [38] While the statin trials have all been clearly favorable for improved human outcomes, only ASTEROID showed evidence of (slight) atherosclerotic regression. Both human and animal trials that showed evidence of disease regression used more aggressive combination-agent treatment strategies, which nearly always included niacin.

Diet and dietary supplements 

Niacin (vitamin B3), in pharmacologic doses (generally 1,000 to 3,000 mg/day), sold in many OTC and prescription formulations, tends to (a) improve HDL levels, size and function, (b) shift LDL particle distribution toward larger particle size, and (c) lower lipoprotein(a), an atherosclerosis-promoting genetic variant of LDL. Additionally, individual responses to daily niacin, while mostly evident after a month at effective doses, tend to continue to improve slowly over time. (However, careful patient understanding of how to achieve this without nuisance symptoms is needed, though not often achieved.) Research on increasing HDL particle concentration and function beyond the usual niacin effect/response, which may be even more important, is slowly advancing.


Dietary changes to achieve benefit have been more controversial, generally far less effective, and less widely adhered to with success. One key reason for this is that most cholesterol, typically 80-90%, within the body is created and controlled by internal production by all cells in the body (true of all animals), with typically slightly greater relative production by hepatic (liver) cells. (Cell structure relies on fat membranes to separate and organize intracellular water, proteins and nucleic acids, and cholesterol is one of the components of all animal cell membranes.)

While the absolute production quantities vary with the individual, group averages for total human body content of cholesterol within the U.S. population commonly run about 35,000 mg (assuming lean build; this varies with body weight and build), with about 1,000 mg/day of ongoing production. Dietary intake plays a smaller role, 200-300 mg/day being common values; for pure vegetarians it is essentially 0 mg/day, but this typically does not change the situation very much, because internal production increases to largely compensate for the reduced intake. For many, especially those with greater than optimal body mass and increased glucose levels, reducing carbohydrate intake (especially simple forms), not fats or cholesterol, is often more effective for improving lipoprotein expression patterns, weight and blood glucose values. For this reason, medical authorities much less frequently promote low-dietary-fat concepts than was commonly the case prior to about 2005. However, evidence has increased that processed trans fats, particularly those produced by industrial non-enzymatic hydrogenation, as opposed to the natural cis-configured fats that living cells primarily produce, are a significant health hazard.
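The quantities quoted above show why diet alone moves total cholesterol so little: at typical intakes, internal production dominates. A back-of-the-envelope check using the figures in the text (the 250 mg/day value is an assumed midpoint of the quoted 200-300 mg/day range):

```python
endogenous_mg_per_day = 1000   # typical internal production, from the text
dietary_mg_per_day = 250       # assumed midpoint of the 200-300 mg/day range

total = endogenous_mg_per_day + dietary_mg_per_day
endogenous_share = endogenous_mg_per_day / total

print(f"{endogenous_share:.0%} of daily cholesterol is produced internally")
# -> 80%, at the low end of the 80-90% figure quoted above
```

Because production also rises to compensate when intake falls, the effective internal share in practice sits toward the upper end of that range.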

Dietary supplements of omega-3 oils, especially those from the muscle of some deep salt-water fish species, also have clinical evidence of significant protective effects, as confirmed by 6 double-blind, placebo-controlled human clinical trials.

Less robust evidence shows that homocysteine and uric acid levels, including within the normal range, promote 
atherosclerosis and that lowering these levels is helpful. 

In animals, vitamin C deficiency has been confirmed to play an important role in the development of hypercholesterolemia and atherosclerosis, but for ethical reasons placebo-controlled human studies are impossible to do. Vitamin C acts as an antioxidant in vessels and inhibits the inflammatory process. It has therapeutic properties on high blood pressure and its fluctuation, and on arterial stiffness in diabetes. [41] [42] [43] Vitamin C is also a natural regulator of cholesterol, and higher doses (over 150 mg/kg daily) may confer significant protection against atherosclerosis even in the situation of elevated cholesterol levels.

The scale of vitamin C benefits on the cardiovascular system led several authors to theorize that vitamin C deficiency is the primary cause of cardiovascular diseases. The theory was unified by twice Nobel prize winner Linus Pauling and Matthias Rath. They point out that vitamin C is produced by all mammals except humans and the great apes, due to a genetic deficiency that arose in the common ancestor of humans and apes. To survive, humans and apes must eat sufficient vitamin C each day; without vitamin C, humans develop scurvy. Vitamin C is an essential element in ensuring that the vascular system is strong and flexible. Pauling and Rath suggest that a deficiency causes weakness in the arterial system and that the body compensates by trying to stiffen the artery walls using other common blood elements, causing the effect known as atherosclerosis. They suggest that clinical manifestations of cardiovascular diseases are merely an overshoot of body defense mechanisms involved in stabilisation of the vascular wall after it is weakened by vitamin C deficiency and the subsequent collagen degradation. They also discuss several metabolic and genetic predispositions, our inability to produce vitamin C at all being the main one.
The Unified Theory of Human Cardiovascular Disease suggests that atherosclerosis may be reversed and cured but 
no testing or trials have yet been started to test Pauling's vitamin C theory. 

Trials of vitamin E have failed to find a beneficial effect, for various reasons, though for some patients at high risk for atherosclerosis there may be some benefit.

Menaquinone (vitamin K2), but not phylloquinone (vitamin K1), intake is associated with reduced risk of CHD mortality, all-cause mortality and severe aortic calcification.


Excess iron may be involved in the development of atherosclerosis, but one study found that reducing body iron stores in patients with symptomatic peripheral artery disease through phlebotomy did not significantly decrease all-cause mortality or death plus nonfatal myocardial infarction and stroke. Further studies may be warranted.

Changes in diet may help prevent the development of atherosclerosis. Researchers at the Agricultural Research Service have found that avenanthramides, chemical compounds found in oats, may help reduce the inflammation of the arterial wall that contributes to the development of atherosclerosis. Avenanthramides have anti-inflammatory properties that are linked to moderating proinflammatory cytokines, proteins released by cells to protect and repair tissues. Researchers found that these compounds in oats have the ability to reduce inflammation and therefore help prevent atherosclerosis. [56]

Surgical intervention 

Other physical treatments, helpful in the short term, include minimally invasive angioplasty procedures, which may include stents to physically expand narrowed arteries, and major invasive surgery, such as bypass surgery, to create additional blood supply connections that go around the more severely narrowed areas.


Patients at risk for atherosclerosis-related diseases are increasingly being treated prophylactically with low-dose aspirin and a statin. The high incidence of cardiovascular disease led Wald and Law to propose a Polypill, a once-daily pill containing these two types of drugs in addition to an ACE inhibitor, diuretic, beta blocker, and folic acid. They maintain that high uptake of such a Polypill by the general population would reduce cardiovascular mortality by 80%. It must be emphasized, however, that this is purely theoretical, as the Polypill has never been tested in a clinical trial.

Medical treatments often focus predominantly on the symptoms. However, over time, clinical trials have shown treatments that focus on decreasing the underlying atherosclerotic processes, as opposed to simply treating symptoms, to be more effective.

In summary, the key to the more effective approaches has been better understanding of the widespread and insidious 
nature of the disease and to combine multiple different treatment strategies, not rely on just one or a few approaches. 
In addition, for those approaches, such as lipoprotein transport behaviors, which have been shown to produce the 
most success, adopting more aggressive combination treatment strategies has generally produced better results, both 
before and especially after people are symptomatic. 

Because many blood thinners, particularly warfarin and salicylates such as aspirin, thin the blood by interfering with 
Vitamin K, there is recent evidence that blood thinners that work by this mechanism can actually worsen arterial 
calcification in the long term even though they thin the blood in the short term. 


Lipoprotein imbalances, upper-normal and especially elevated blood sugar (i.e., diabetes), and high blood pressure are risk factors for atherosclerosis. Lowering homocysteine, stopping smoking, taking anticoagulants (anti-clotting agents) that target clotting factors, taking omega-3 oils from fatty fish or plant oils such as flax or canola oil, exercising and losing weight are the usual focus of treatments that have proven helpful in clinical trials. The target serum cholesterol level is ideally equal to or less than 4 mmol/L (160 mg/dL), and triglycerides equal to or less than 2 mmol/L (180 mg/dL).
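The two unit systems quoted for these targets are related through the molar masses of the analytes: mg/dL = mmol/L x molar mass (g/mol) / 10. The conversion factors below (about 38.67 for cholesterol, about 88.57 for an "average" triglyceride) are the standard ones, which shows that the mg/dL figures in the text are rounded values:

```python
# mg/dL = mmol/L * molar_mass(g/mol) / 10
CHOLESTEROL_FACTOR = 38.67    # cholesterol, ~386.7 g/mol
TRIGLYCERIDE_FACTOR = 88.57   # "average" triglyceride, ~885.7 g/mol


def mmol_to_mg_dl(mmol_per_l: float, factor: float) -> float:
    """Convert a lipid concentration from mmol/L to mg/dL."""
    return mmol_per_l * factor


print(round(mmol_to_mg_dl(4.0, CHOLESTEROL_FACTOR)))   # -> 155 (text rounds to 160)
print(round(mmol_to_mg_dl(2.0, TRIGLYCERIDE_FACTOR)))  # -> 177 (text rounds to 180)
```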

Evidence has increased that people with diabetes, despite not having clinically detectable atherosclerotic disease, have more severe debility from atherosclerotic events over time than even non-diabetics who have already suffered atherosclerotic events. Thus diabetes has come to be viewed as an advanced atherosclerotic disease equivalent.



An indication of the role of HDL in atherosclerosis has come from the rare Apo-A1 Milano human genetic variant of this HDL protein. A small short-term trial using bacterially synthesized human Apo-A1 Milano HDL in people with unstable angina produced a fairly dramatic reduction in measured coronary plaque volume in only 6 weeks, versus the usual increase in plaque volume in those randomized to placebo. The trial was published in JAMA in early 2006. Ongoing work starting in the 1990s may lead to human clinical trials, probably by about 2008. These may use synthesized Apo-A1 Milano HDL directly, or they may use gene-transfer methods to pass on the ability to synthesize the Apo-A1 Milano HDL lipoprotein.

Methods to increase high-density lipoprotein (HDL) particle concentrations, which in some animal studies largely reverse and remove atheromas, are being developed and researched.

Niacin has HDL-raising effects (by 10-30%) and showed clinical trial benefit in the Coronary Drug Project; it is commonly used in combination with other lipoprotein agents to improve the efficacy of changing lipoproteins for the better. However, most individuals have nuisance symptoms, such as short-term flushing reactions, especially initially, so working with a physician who has a history of successful niacin implementation, along with careful selection of brand, dosing strategy, etc., is usually critical to success.

However, increasing HDL by any means is not necessarily helpful. For example, the drug torcetrapib is the most 
effective agent currently known for raising HDL (by up to 60%). However, in clinical trials it also raised deaths by 
60%. All studies regarding this drug were halted in December 2006. 

The ERASE trial is a newer trial of an HDL booster, which has shown promise. 

The ASTEROID trial used a high dose of rosuvastatin, the statin with typically the most potent dose/response track record (for both LDL and HDL lipoproteins). It found plaque (intima + media volume) reduction. Several additional rosuvastatin treatment/placebo trials evaluating other clinical outcomes are in progress.
The actions of macrophages drive atherosclerotic plaque progression. Immunomodulation of atherosclerosis is the 
term for techniques that modulate immune system function to suppress this macrophage action. 
Immunomodulation has been pursued with considerable success in both mice and rabbits since about 2002. Plans for 
human trials, hoped for by about 2008, are in progress. 

Research on genetic expression and control mechanisms is progressing. Topics include 

• PPAR, known to be important in blood sugar and variants of lipoprotein production and function; 

• The multiple variants of the proteins that form the lipoprotein transport particles. 

Some controversial research has suggested a link between atherosclerosis and the presence of several different nanobacteria in the arteries, e.g., Chlamydophila pneumoniae, though trials of current antibiotic treatments known to be usually effective in suppressing growth of, or killing, these bacteria have not been successful in improving outcomes.

The immunomodulation approaches mentioned above, because they deal with innate responses of the host that promote atherosclerosis, have far greater prospects for success.


See also 

• Angiogram 

• Arterial stiffness 

• Atheroma 

• Chelation therapy 

• Coronary circulation 

• Coronary catheterization 

• Fatty streaks 

• Monckeberg's arteriosclerosis 

• Intravascular ultrasound 

References
[8] Maton, Anthea; Roshan L. Jean Hopkins, Charles William McLaughlin, Susan Johnson, Maryanna Quon Warner, David LaHart, Jill D. Wright (1993). Human Biology and Health. Englewood Cliffs, New Jersey, USA: Prentice Hall. ISBN 0-13-981176-1. OCLC 32308337.
[9] Glagov S, Weisenberg E, Zarins CK, Stankunavicius R, Kolettis GJ (May 1987). "Compensatory enlargement of human atherosclerotic 

coronary arteries". N. Engl. J. Med. 316 (22): 1371-5. doi:10.1056/NEJM198705283162204. PMID 3574413. 
[10] Nissen. "Effect of Very High-Intensity Statin Therapy on Regression of Coronary Atherosclerosis— The ASTEROID Trial" (http://jama.;295/13/1556.pdf?ijkey=Md42dlk7z9TzyL8&keytype=finite). JAMA. . 

[11] Fabricant CG, Fabricant J (1999). "Atherosclerosis induced by infection with Marek's disease herpesvirus in chickens". Am Heart J 138 (5 Pt 2): S465-8. doi:10.1016/S0002-8703(99)70276-0. PMID 10539849.

[12] Hsu HY, Nicholson AC, Pomerantz KB, Kaner RJ, Hajjar DP (August 1995). "Altered cholesterol trafficking in herpesvirus-infected arterial 

cells. Evidence for viral protein kinase- mediated cholesterol accumulation" (http://www.jbc. org/cgi/pmidlookup?view=long& 

pmid=7642651). J Biol Chem 270 (33): 19630-7. doi:10.1074/jbc.270.33.19630. PMID 7642651.
[13] Cheng J, Ke Q, Jin Z, Wang H, Kocher O, Morgan JP, Zhang J, Crumpacker CS (May 2009). "Cytomegalovirus Infection Causes an 

Increase of Arterial Blood Pressure" (http://www.plospathogens.Org/article/info:doi/10.1371/journal.ppat.1000427). PLoS Pathog 5 (5): 

e1000427. doi:10.1371/journal.ppat.1000427. PMID 19436702. PMC 2673691.
[14] Blankenhorn DH, Hodis HN (August 1993). "Atherosclerosis— reversal with therapy" ( 

fcgi?tool=pmcentrez&artid=1022223). The Western journal of medicine 159 (2): 172-9. PMID 8212682. PMC 1022223. 
[15] Mitchell, Richard Sheppard; Kumar, Vinay; Abbas, Abul K; Fausto, Nelson (2007). Robbins Basic Pathology: With STUDENT CONSULT 

Online Access. Philadelphia: Saunders, pp. 345. ISBN 1-4160-2973-7. 8th edition. 
[16] Narain VS, Gupta N, Sethi R, Puri A, Dwivedi SK, Saran RK, Puri VK. Clinical correlation of multiple biomarkers for risk assessment in 

patients with acute coronary syndrome. Indian Heart J. 2008 Nov-Dec;60(6):536-42. 
[17] Bhatt DL, Topol EJ (July 2002). "Need to test the arterial inflammation hypothesis" ( 

circulationaha;106/1/136). Circulation 106 (1): 136-40. doi:10.1161/01.CIR.0000021112.29409.A2. PMID 12093783.
[18] Griffin M, Frazer A, Johnson A, Collins P, Owens D, Tomkin GH (1998). "Cellular cholesterol synthesis— the relationship to post-prandial 

glucose and insulin following weight loss". Atherosclerosis. 138 (2): 313-8. doi:10.1016/S0021-9150(98)00036-7. PMID 9690914. 
[19] King, Cr; Knutson, Kl; Rathouz, Pj; Sidney, S; Liu, K; Lauderdale, Ds (December 2008). "Short sleep duration and incident coronary artery 

calcification." ( JAMA : the journal of the 

American Medical Association 300 (24): 2859-66. doi:10.1001/jama.2008.867. PMID 19109114. PMC 2661105. 
[20] "Food Pyramids: Nutrition Source, Harvard School of Public Health" ( . 

Retrieved 2007-11-25. 
[21] Taubes G (March 2001). "Nutrition. The soft science of dietary fat" ( 

pmid=11286266). Science (New York, N.Y.) 291 (5513): 2536-45. doi:10.1126/science.291.5513.2536. PMID 11286266.
[22] Song JH, Fujimoto K, Miyazawa T (2000). "Polyunsaturated (n-3) fatty acids susceptible to peroxidation are increased in plasma and tissue 

lipids of rats fed docosahexaenoic acid-containing oils". J. Nutr. 130 (12): 3028-33. PMID 11110863. 
[23] Yap SC, Choo YM, Hew NF, et al. (1995). "Oxidative susceptibility of low density lipoprotein from rabbits fed atherogenic diets containing 

coconut, palm, or soybean oils". Lipids 30 (12): 1145-50. doi:10.1007/BF02536616. PMID 8614305. 


[24] Greco AV, Mingrone G (1990). "Serum and biliary lipid pattern in rabbits feeding a diet enriched with unsaturated fatty acids". Exp Pathol 

40 (1): 19-33. PMID 2279534. 
[25] Mattes RD (2005). "Fat taste and lipid metabolism in humans" ( 1-9384(05)00397-5). 

Physiol. Behav. 86 (5): 691-7. doi:10.1016/j.physbeh.2005.08.058. PMID 16249011. . "The rancid odor of an oxidized fat is readily 

[26] Dobarganes C, Marquez-Ruiz G (2003). "Oxidized fats in foods". Curr Opin Clin Nutr Metab Care 6 (2): 157-63. 

doi:10.1097/ (inactive 2008-06-27). PMID 12589185. 
[27] How Bad Are Cooking Oils? ( by Udo Erasmus, PhD 
[28] "Coronary atherosclerosis - the fibrous plaque with calcification" ( 

php). . Retrieved 2010-03-25. 
[29] Maseri A, Fuster V (2003). "Is there a vulnerable plaque?". Circulation 107 (16): 2068-71. doi:10.1161/01.CIR.0000070585.48035.Dl. 

PMID 12719286. 
[30] The Allhat Officers And Coordinators For The Allhat Collaborative Research Group, (2002). "Major outcomes in moderately 

hypercholesterolemic, hypertensive patients randomized to pravastatin vs usual care: The Antihypertensive and Lipid-Lowering Treatment to 

Prevent Heart Attack Trial (ALLHAT-LLT)". JAMA 288 (23): 2998-3007. doi:10.1001/jama.288.23.2998. PMID 12479764.
[31] Vos E, Rose CP (November 2005). "Questioning the benefits of statins" ( CMAJ 

173 (10): 1207; author reply 1210. doi: 10. 1503/cmaj. 1050120. PMID 16275976. PMC 1277053. . 
[32] T. E. Strandberg, S. Lehto, K. Pyorala, A. Kesaniemi, H. Oksa (1997-01-11). "Cholesterol lowering after participation in the Scandinavian

Simvastatin Survival Study (4S) in Finland" ( European Heart 

Journal 18 (11): 1725-1727. PMID 9402446. Retrieved 2007-11-18.
[33] Nissen SE, Nicholls SJ, Sipahi I, et al. (2006). "Effect of very high-intensity statin therapy on regression of coronary atherosclerosis: the 

ASTEROID trial" (;295/13/1556.pdf?ijkey=Md42dlk7z9TzyL8&keytype=finite) (PDF). 

JAMA 295 (13): 1556-65. doi:10.1001/jama.295.13.jpc60002. PMID 16533939. . 
[34] Downs JR, Clearfield M, Weis S, et al. (May 1998). "Primary prevention of acute coronary events with lovastatin in men and women with 

average cholesterol levels: results of AFCAPS/TexCAPS. Air Force/Texas Coronary Atherosclerosis Prevention Study" (http://jama. JAMA : the journal of the American Medical Association 279 (20): 1615—22. 

doi:10.1001/jama.279.20.1615. PMID 9613910. . 
[35] Bradford RH, Shear CL, Chremos AN, et al. (1991). "Expanded Clinical Evaluation of Lovastatin (EXCEL) study results. I. Efficacy in 

modifying plasma lipoproteins and adverse event profile in 8245 patients with moderate hypercholesterolemia". Arch. Intern. Med. 151 (1): 

43-9. doi:10.1001/archinte.l51.1.43. PMID 1985608. 
[36] Sever PS, Poulter NR, Dahlof B, et al. (2005). "Reduction in cardiovascular events with atorvastatin in 2,532 patients with type 2 diabetes: 

Anglo-Scandinavian Cardiac Outcomes Trial— lipid-lowering arm (ASCOT-LLA)". Diabetes Care 28 (5): 1151—7. 

doi: 10.2337/diacare.28.5. 1151. PMID 15855581. 
[37] Linda Brookes, MSc. "SPARCL: Stroke Prevention by Aggressive Reduction in Cholesterol Levels" ( 

viewarticle/536377). Medscape. . Retrieved 2007-11-19. 
[38] Amarenco P, Bogousslavsky J, Callahan AS, et al. (2003). "Design and baseline characteristics of the stroke prevention by aggressive 

reduction in cholesterol levels (SPARCL) study" (http://content. asp?Aktion=ShowAbstract& 

ProduktNr=224153&ArtikelNr=72562). Cerebrovascular diseases 16 (4): 389-95. doi: 10. 1159/000072562. PMID 14584489. . 
[39] Ginter E (2007). "Chronic vitamin C deficiency increases the risk of cardiovascular diseases". Bratisl Lek Listy 108 (9): 417—21. 

PMID 18225482. 
[40] Bohm F, Settergren M, Pernow J (2007). "Vitamin C blocks vascular dysfunction and release of interleukin-6 induced by endothelin- 1 in 

humans in vivo". Atherosclerosis 190 (2): 408-15. doi:10.1016/j.atherosclerosis.2006.02.018. PMID 16527283. 
[41] Sato K, Dohi Y, Kojima M, Miyagawa K, Takase H, Katada E, Suzuki S. (2006). "Effects of ascorbic acid on ambulatory blood pressure in 

elderly patients with refractory hypertension". Arzneimittelforschung 56 (7): 535—40. PMID 16927536. 
[42] Gladys Block, Christopher D Jensen, Edward P Norkus, Mark Hudes and Patricia B Crawford (2008). "Vitamin C in plasma is inversely 

related to blood pressure and change in blood pressure during the previous year in young Black and White women" (http://www.nutritionj. 

com/content/7/1/35). Nutrition Journal 7 (35): 535-40. doi: 10.1 186/1475-2891-7-35. PMID 19091068. PMC 2621233. . 
[43] Brian A. Mullan; Ian S. Young; Howard Fee; David R. McCance (2002). "Ascorbic Acid Reduces Blood Pressure and Arterial Stiffness in 

Type 2 Diabetes" (http://hyper.ahajournals.Org/cgi/content/abstract/40/6/804). Hypertension 40 (6): 804. 

doi:10.1161/01.HYP.0000039961. 13718.00. PMID 12468561. . 
[44] HJ Harwood Jr, YJ Greene and PW Stacpoole (June 5, 1986). "Inhibition of human leukocyte 3-hydroxy-3-methylglutaryl coenzyme A 

reductase activity by ascorbic acid. An effect mediated by the free radical monodehydroascorbate" ( 

16/7127). J. Biol. Chem. 261 (16): 7127-7135. PMID 3711081. . 
[45] Das S, Ray R, Snehlata, Das N, Srivastava LM (2006). "Effect of ascorbic acid on prevention of hypercholesterolemia induced 

atherosclerosis". Mol Cell Biochem 285 (1-2): 143-7. doi:10.1007/s11010-005-9070-x. PMID 16479321.
[46] Klenner F.R. (1974). "Significance of high daily intake of ascorbic acid in preventive medicine" (

ascorbate/197x/klenner-fr-j_int_assn_prev_med- 1974-vl-nl-p45.htm). J. Int. Acad. Prev. Med. 1: 45—49. . 
[47] Stone, I. (1972). The Healing Factor: Vitamin C Against Disease ( 

scienziati-docu/stone/HEALINGFACTVitaC-TUTTO-ING_file/HEALINGFACTVitaC-TUTTO-ING.htm). Grosset and Dunlap, New 


York/Perigee Books, published by The Putnam Publishing Group. ISBN 0-399-50764-7. . 
[48] Rath M, Pauling L (1992). "A Unified Theory of Human Cardiovascular Disease Leading the Way to the Abolition of This Disease as a 

Cause for Human Mortality" ( (PDF). Journal of 

Orthomolecular Medicine 7 (1): 5—15. . 
[49] Robinson I, de Serna DG, Gutierrez A, Schade DS (2006). "Vitamin E in humans: an explanation of clinical trial failure". Endocr Pract 12 

(5): 576-82. PMID 17002935. 
[50] Geleijnse JM, Vermeer C, Grobbee DE, et al. (2004). "Dietary intake of menaquinone is associated with a reduced risk of coronary heart 

disease: the Rotterdam Study". J. Nutr. 134 (11): 3100-5. PMID 15514282. 
[51] Erkkila AT, Booth SL (2008). "Vitamin K intake and atherosclerosis". Curr. Opin. Lipidol. 19 (1): 39-42.

doi:10.1097/MOL.0b013e3282flc57f. PMID 18196985. 
[52] Wallin R, Schurgers L, Wajih N (2008). "Effects of the blood coagulation vitamin K as an inhibitor of arterial calcification" (http://www. Thromb. Res. 122 (3): 411. 

doi:10.1016/j.thromres.2007.12.005. PMID 18234293. PMC 2529147. 
[53] Brewer GJ (2007). "Iron and copper toxicity in diseases of aging, particularly atherosclerosis and Alzheimer's disease". Exp. Biol. Med. 

(Maywood) 232 (2): 323-35. PMID 17259340. 
[54] Sullivan JL, Mascitelli L (2007). "[Current status of the iron hypothesis of cardiovascular diseases]" (in Italian). Recenti Prog Med 98 (7-8): 

373-7. PMID 17685184. 
[55] Zacharski LR, Chow BK, Howes PS, et al. (2007). "Reduction of iron stores and cardiovascular outcomes in patients with peripheral arterial 

disease: a randomized controlled trial". JAMA 297 (6): 603-10. doi:10.1001/jama.297.6.603. PMID 17299195. 
[57] "Heart disease and stents" ( Cypher Stent. . Retrieved 

[58] Wald NJ, Law MR (June 2003). "A strategy to reduce cardiovascular disease by more than 80%" ( 

articlerender.fcgi?tool=pmcentrez&artid=162259). BMJ (Clinical research ed.) 326 (7404): 1419. doi:10.1136/bmj.326.7404.1419. 

PMID 12829553. PMC 162259. 
[59] Price PA, Faus SA, Williamson MK (February 2000). "Warfarin-induced artery calcification is accelerated by growth and vitamin D" (http:/ 

/ Arteriosclerosis, thrombosis, and vascular biology 20 (2): 317—27. 

PMID 10669626. . 
[60] Geleijnse JM, Vermeer C, Grobbee DE, et al. (November 2004). "Dietary intake of menaquinone is associated with a reduced risk of 

coronary heart disease: the Rotterdam Study" ( J Nutr. 134 (11): 3100—5. 

PMID 15514282. . 
[61] "Linus Pauling Institute at Oregon State University" ( . 

Retrieved 2010-03-25. 
[62] "" ( . Retrieved 2010-03-25. 
[63] Barter PJ, Caulfield M, Eriksson M, et al. (November 2007). "Effects of torcetrapib in patients at high risk for coronary events" (http:// N Engl J Med. 357 (21): 2109-22. doi:10.1056/NEJMoa0706628. PMID 17984165. . 
[64] Sue Hughes (March 26, 2007). "ERASE: New HDL mimetic shows promise" ( HeartWire. . 
[65] Jan Nilsson; Goran K. Hansson; Prediman K. Shah (2005). "Immunomodulation of Atherosclerosis — Implications for Vaccine 

Development — ATVB In Focus" (;25/l/18). Arteriosclerosis, Thrombosis, and 

Vascular Biology 25 (1): 18-28. doi:10.1161/01.ATV.0000149142.42590.a2. PMID 15514204.
[66] M Stitzinger (2007). "Lipids, inflammation and atherosclerosis" ( 

pdf) (pdf). The digital repository of Leiden University. . Retrieved 2007-11-02. "Results of clinical trials investigating anti-chlamydial 

antibiotics as an addition to standard therapy in patients with coronary artery disease have been inconsistent. Therefore, Andraws et al. 

conducted a meta- analysis of these clinical trials and found that evidence available to date does not demonstrate an overall benefit of 

antibiotic therapy in reducing mortality or cardiovascular events in patients with coronary artery disease. " 

External links 

• Atherosclerosis ( 
Vascular_Disorders/Atherosclerosis//) at the Open Directory Project 

• A four-minute animation of the atherosclerosis process, entitled "Pathogenesis of Acute MI," commissioned by 
Paul M. Ridker, MD, MPH, FACC, FAHA, at the Harvard Medical School, can be viewed at (http:/ 


Interventional cardiology 

Interventional cardiology is a branch of the medical specialty of cardiology that deals specifically with the catheter-based treatment of structural heart diseases. Andreas Gruentzig is considered the father of interventional cardiology.

A large number of procedures can be performed on the heart by catheterization. This most commonly involves the insertion of a sheath into the femoral artery (though, in practice, any large peripheral artery or vein may be used) and cannulating the heart under X-ray visualization (most commonly fluoroscopy, a real-time X-ray technique). The radial artery may also be used for cannulation; this approach offers several advantages, including the accessibility of the artery in most patients, the easy control of bleeding even in anticoagulated patients, the enhancement of comfort (because patients are capable of sitting up and walking immediately following the procedure), and the near absence of clinically significant sequelae in patients with a normal Allen test.

The main advantage of the interventional cardiology approach is the avoidance of the scars, pain, and long postoperative recovery associated with surgery. Additionally, the interventional cardiology procedure of primary angioplasty is now the gold standard of care for an acute myocardial infarction. It involves the extraction of clots from occluded coronary arteries and the deployment of stents and balloons through a small hole made in a major artery, leaving no scars, which has given it the name "pin-hole surgery" (as opposed to "key-hole surgery").

Procedures performed by specialists in interventional cardiology:

Angioplasty

Also called percutaneous transluminal coronary angioplasty (PTCA), angioplasty is an intervention for the treatment of coronary artery disease.

Valvuloplasty

Valvuloplasty is the dilation of narrowed cardiac valves (usually mitral, aortic, or pulmonary).

Congenital heart defect correction 

Percutaneous approaches can be employed to correct atrial septal and ventricular septal defects, closure of a 
patent ductus arteriosus, and angioplasty of the great vessels. 

Percutaneous valve replacement

An alternative to open heart surgery, percutaneous valve replacement is the replacement of a heart valve using percutaneous methods.

Coronary thrombectomy 

Coronary thrombectomy involves the removal of a thrombus (blood clot) from the coronary arteries. 

Cardiac ablation 

A technique performed by clinical electrophysiologists, cardiac ablation is used in the treatment of cardiac arrhythmias.
Surgery of the heart is done by the specialty of cardiothoracic surgery. Some interventional cardiology procedures 
are only performed when there is cardiothoracic surgery expertise in the hospital, in case of complications. 


See also 

• Interventional radiology 

• Catheter 

• Cannula 

• Stent 

• Restenosis 


References

[1] Lakhan SE, Kaplan A, Laird C, Leiter Y (2009). "The interventionalism of medicine: interventional radiology, cardiology, and neuroradiology" (http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2745361). International Archives of Medicine 2: 27. doi:10.1186/1755-7682-2-27. PMID 19740425. PMC 2745361.
[2] Hurst, J. Willis; Fuster, Valentin; O'Rourke, Robert A. (2004). Hurst's The Heart ( 

pg=RA2-PA481#PRA2-PA484,Ml). New York: McGraw-Hill, Medical Publishing Division, pp. 484. ISBN 0-07-142264-1. . 
[3] "Evanston Northwestern Hospital Interventional Cardiology" ( 

procedures/default.aspx?id=3631). . Retrieved 2008-03-06. 

External links 

• ( 

• Angioplasty. Org ( 

• European Association of Percutaneous Cardiovascular Interventions (EAPCI) ( 

• Interventional Portal: Relevant links for interventional cardiologists ( 

• ( 


Non-invasive Cardiology Techniques and Methods

SQUID
A SQUID (for superconducting quantum interference device) is a very sensitive magnetometer used to measure extremely weak magnetic fields, based on superconducting loops containing Josephson junctions.

SQUIDs are sensitive enough to measure fields as low as 5 aT (5×10^-18 T) within a few days of averaged measurements. Their noise levels are as low as 3 fT·Hz^-1/2. For comparison, a typical refrigerator magnet produces 0.01 tesla (10^-2 T), and some processes in animals produce very small magnetic fields, between 10^-9 T and 10^-6 T. Recently invented SERF atomic magnetometers are potentially more sensitive and do not require cryogenic refrigeration, but they are orders of magnitude larger in size (~1 cm^3) and must be operated in a near-zero magnetic field.

[Figure: Sensing element of the SQUID]
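The quoted sensitivity figures can be sanity-checked with a short calculation (illustrative only; the noise and field values are the ones quoted in this section). For a white-noise-limited magnetometer, averaging for a time T reduces the resolvable field roughly as the square root of T, so resolving 5 aT with a 3 fT/√Hz noise floor takes on the order of (3 fT / 5 aT)² seconds:

```python
# Rough averaging-time estimate for a white-noise-limited magnetometer:
# after averaging for T seconds, the resolvable field is about S / sqrt(T),
# so T ~ (S / B)**2.  Values are the ones quoted in the text.
noise_floor = 3e-15    # tesla per sqrt(Hz)  (3 fT/sqrt(Hz))
target_field = 5e-18   # tesla               (5 aT)

t_seconds = (noise_floor / target_field) ** 2
t_days = t_seconds / 86400.0
print(f"{t_seconds:.0f} s ~ {t_days:.1f} days")
```

The result is a little over four days, consistent with the "few days of averaged measurements" quoted above.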

History and design 

There are two main types of SQUID: direct current (DC) and radio frequency (RF). RF SQUIDs can work with only 
one Josephson junction, which might make them cheaper to produce, but are less sensitive. 


The DC SQUID was invented in 1964 by Robert Jaklevic, John J. Lambe, James Mercereau, and Arnold Silver of Ford Research Labs, after Brian David Josephson postulated the Josephson effect in 1962 and the first Josephson junction was made by John Rowell and Philip Anderson at Bell Labs in 1963. It has two Josephson junctions in parallel in a superconducting loop and is based on the DC Josephson effect. In the absence of any external magnetic field, the input current I splits equally into the two branches. If a small external flux is applied to the superconducting loop, a screening current I_s is induced that generates a magnetic field cancelling the applied flux. The current in one branch of the loop is then I/2 + I_s, in the direction of I, and the current in the second branch is I/2 − I_s. As soon as the current in either branch exceeds the critical current I_c of its Josephson junction, the superconducting ring becomes resistive and a voltage appears across the junction. Now suppose the external flux is increased further until it exceeds Φ0/2, half the magnetic flux quantum. Since the flux enclosed by the superconducting loop must be an integer number of flux quanta, instead of screening the flux the SQUID now energetically prefers to increase it to Φ0, and the screening current reverses direction. The screening current thus changes direction every time the flux increases by half-integer multiples of Φ0, and the critical current oscillates as a function of the applied flux. If the input current exceeds I_c, the SQUID always operates in the resistive mode, and the voltage is then a periodic function of the applied magnetic field with period Φ0. Since the current-voltage characteristic of the DC SQUID is hysteretic, a shunt resistance R is connected across the junctions to eliminate the hysteresis (although in the case of copper-oxide-based high-temperature superconductors the junctions' own intrinsic resistance is usually sufficient). The screening current is the applied flux divided by the self-inductance of the ring, I_s = ΔΦ/L, so ΔΦ can be estimated from the voltage change ΔV (the SQUID acts as a flux-to-voltage converter): ΔV = R·ΔI = 2R·I_s = 2R·ΔΦ/L, where L is the self-inductance of the superconducting ring.
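The oscillation of the critical current with applied flux can be sketched numerically. The expression below is the idealized textbook result for two identical junctions with negligible loop inductance; the per-junction critical current i0 is an arbitrary illustrative value, not from the source:

```python
import math

PHI0 = 2.067833848e-15  # magnetic flux quantum, in webers

def critical_current(flux, i0):
    """Idealized DC-SQUID critical current vs applied flux.

    For two identical junctions (critical current i0 each) and
    negligible loop inductance, the textbook result is
    I_c(Phi) = 2*i0*|cos(pi*Phi/Phi0)|, periodic in Phi0.
    """
    return 2.0 * i0 * abs(math.cos(math.pi * flux / PHI0))

i0 = 10e-6  # 10 uA per junction (illustrative assumption)
print(critical_current(0.0, i0))       # maximum: 2*i0
print(critical_current(PHI0 / 2, i0))  # ~0 at half a flux quantum
print(critical_current(PHI0, i0))      # back to 2*i0: the period is Phi0
```

Biasing the SQUID just above this flux-dependent critical current is what turns the device into the flux-to-voltage converter described above.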


The RF SQUID was invented in 1965 by Robert Jaklevic, John J. Lambe, Arnold Silver, and James Edward Zimmerman at Ford. It is based on the AC Josephson effect and uses only one Josephson junction. It is less sensitive than the DC SQUID but is cheaper and easier to manufacture in smaller quantities. Most fundamental measurements in biomagnetism, even of extremely small signals, have been performed using RF SQUIDs (tonotopic representation of the auditory cortex by Romani and Williamson 1980; brainstem auditory evoked magnetic fields by Erne et al. 1987). The RF SQUID is inductively coupled to a resonant tank circuit. Depending on the external magnetic field, as the SQUID operates in the resistive mode, the effective inductance of the tank circuit changes, thus changing the resonant frequency of the tank circuit. These frequency measurements can be made easily, and the losses, which appear as the voltage across the load resistor in the circuit, are a periodic function of the applied magnetic flux with a period of Φ0. For a precise mathematical description refer to the original paper by Erne et al.

[Figure: A prototype of a semiconductor SQUID]
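The tank-circuit readout described above relies on the standard LC resonance formula, f = 1/(2π√(LC)): a small flux-dependent change in the effective inductance shifts the readout frequency. A minimal sketch, where the component values are illustrative assumptions and not from the source:

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Resonant frequency of an LC tank circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# Illustrative tank-circuit values (assumptions, not from the source):
L = 0.3e-6    # 300 nH tank inductance
C = 200e-12   # 200 pF tank capacitance
f0 = resonant_frequency(L, C)

# A flux-dependent change in the SQUID's effective inductance, coupled
# into the tank, shifts the resonance: since f ~ L**-0.5, a 1% inductance
# change gives roughly a 0.5% frequency shift.
f1 = resonant_frequency(L * 1.01, C)
print(f0, f1, (f0 - f1) / f0)
```

Tracking this frequency shift (or the flux-periodic losses in the tank) is what converts the applied flux into a measurable electrical signal.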



The traditional superconducting materials for SQUIDs are pure niobium or a lead alloy with 10% gold or indium, as 
pure lead is unstable when its temperature is repeatedly changed. To maintain superconductivity, the entire device 
needs to operate within a few degrees of absolute zero, cooled with liquid helium. 

"High temperature" SQUID sensors are more recent; they are made of high-temperature superconductors, particularly YBCO, and are cooled by liquid nitrogen, which is cheaper and more easily handled than liquid helium. They are less sensitive than conventional "low temperature" SQUIDs but good enough for many applications.


Uses

The extreme sensitivity of SQUIDs makes them ideal for studies in biology. Magnetoencephalography (MEG), for example, uses measurements from an array of SQUIDs to make inferences about neural activity inside brains. Because SQUIDs can operate at acquisition rates much higher than the highest temporal frequency of interest in the signals emitted by the brain (kHz), MEG achieves good temporal resolution. Another area where SQUIDs are used is magnetogastrography, which is concerned with recording the weak magnetic fields of the stomach. A novel application of SQUIDs is the magnetic marker monitoring method, which is used to trace the path of orally applied drugs. In the clinical environment SQUIDs are used in cardiology for magnetic field imaging (MFI), which detects the magnetic field of the heart for diagnosis and risk stratification.

Probably the most common use of SQUIDs is in magnetic property measurement systems (MPMS). These are turn-key systems, made by several manufacturers, that measure the magnetic properties of a material sample. This is


typically done over a temperature range from 4 K to roughly 190 K, though higher temperatures mean less sensitivity.

For example, SQUIDs are being used as detectors to perform magnetic resonance imaging (MRI). While high-field MRI uses precession fields of one to several teslas, SQUID-detected MRI uses measurement fields that lie in the microtesla regime. Since the MRI signal drops off as the square of the magnetic field, a SQUID is used as the detector because of its extreme sensitivity. The SQUID, coupled to a second-order gradiometer and input circuit, together with the applied gradient fields, forms the detection system that allows a research group to retrieve noninvasive images. SQUID-detected MRI has advantages over high-field MRI systems, such as the low cost required to build such a system and its compactness. The principle has been proven by imaging human extremities, and its future application may involve tumor screening.
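The field-squared scaling quoted above shows just how weak the raw signal is in the microtesla regime, and hence why such a sensitive detector is needed. A rough illustration, with field values that are assumptions rather than from the source:

```python
def relative_mri_signal(b_low, b_high):
    """Relative raw MRI signal, using the scaling quoted in the text:
    the signal falls off as the square of the magnetic field."""
    return (b_low / b_high) ** 2

# Illustrative comparison (assumed values): a 100 microtesla SQUID-detected
# MRI measurement field versus a conventional 1.5 T clinical scanner.
ratio = relative_mri_signal(100e-6, 1.5)
print(ratio)  # a factor of several billion below the high-field signal
```

The enormous signal deficit is offset by the SQUID's sensitivity and by the fact that noise and relaxation behavior also differ at low field.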

Another application is the scanning SQUID microscope, which uses a SQUID immersed in liquid helium as the 
probe. The use of SQUIDs in oil prospecting, mineral exploration, earthquake prediction and geothermal energy 
surveying is becoming more widespread as superconductor technology develops; they are also used as precision 
movement sensors in a variety of scientific applications, such as the detection of gravitational waves. Four SQUIDs 
were employed on Gravity Probe B in order to test the limits of the theory of general relativity. 

It has also been suggested that they might be implemented in a quantum computer. These are the only macroscopic 
devices that have been cited as possible qubits in this context. 

In fiction 

• The science fiction writer William Gibson made reference to SQUIDs in his 1981 story "Johnny Mnemonic", 
where a genetically engineered ex-military dolphin uses a SQUID implant to read a memory device in the title 
character's brain. 

• In the film Strange Days, SQUIDs are used to record and play back human memories, some of which are 
exchanged on the black market. 

• In Michael Crichton's 1999 novel Timeline, SQUIDs are mentioned as a part of the quantum teleportation device 
developed by ITC. 

• Jon Courtenay Grimwood's novel redRobe makes reference to SQUID probes being used to read memories and 
thoughts as part of a particularly invasive interrogation. 

• SQUIDs are used as part of an advanced polygraph/FMRI machine in Neal Stephenson's novel Snow Crash. 

• SQUIDS feature in the movie Chemical Wedding, where they are part of a supercomputer that is used to 
re-incarnate Aleister Crowley. 

• In the movie Watchmen, the explosive devices employed by Ozymandias display the text "S.Q.U.I.D. Initializing." This may be a reference to the scenario of an alien squid-like creature in the original comic book, which the bombs replaced.

See also

• Josephson effect

• Spallation Neutron Source

• Mineral exploration

• Vibrating sample magnetometer

References
[1] Ran, Shannon K'doah (2004) (PDF). Gravity Probe B: Exploring Einstein's Universe with Gyroscopes ( 

content/education/GP-B_T-Guide4-2008.pdf). NASA. p. 26. . 
[2] D. Drung, C. Aßmann, J. Beyer, A. Kirste, M. Peters, F. Ruede, and Th. Schurig (2007). "Highly sensitive and easy-to-use SQUID sensors"

( l/SQUID_Stromsensoren/Drung_ASC06_Preprint.pdf). IEEE Transactions on Applied 

Superconductivity 17 (2): 699. doi:10.1109/TASC.2007.897403. . 
[3] E. du Tremolete de Lacheisserie, D. Gignoux, and M. Schlenker (editors) (2005). Magnetism: Materials and Applications. 2. Springer. 
[4] J. Clarke and A. I. Braginski (Eds.) (2004). The SQUID handbook. 1. Wiley- Vch. 
[5] R. C. Jaklevic, J. Lambe, A. H. Silver, and J. E. Mercereau (1964). "Quantum Interference Effects in Josephson Tunneling". Phys. Rev. Letters 12: 159-160. doi:10.1103/PhysRevLett.12.159.
[6] S.N. Erne, M. Hoke, M. Lutkenhohner, C. Pantev, H.J. Scheer (1987). "Brainstem auditory evoked magnetic fields in response to stimulation with brief tone pulses". Int. J. Neuroscience 37: 115-125. doi:10.3109/00207458708987142.
[7] S.N. Erne, H.-D. Hahlbohm, H. Lubbig (1976). "Theory of the RF biased Superconducting Quantum Interference Device for the non-hysteretic regime". J. Appl. Phys. 47: 5440-5442. doi:10.1063/1.322574.


Diabetes type 2 

Diabetes mellitus type 2 

Classification and external resources

[Image: Universal blue circle symbol for diabetes.]

ICD-10: E11
ICD-9: 250.00, 250.02
DiseasesDB: 3661
MedlinePlus: 000313
eMedicine: article/117853
MeSH: D003924

Diabetes mellitus type 2 — formerly non-insulin-dependent diabetes mellitus (NIDDM) or adult-onset 
diabetes — is a metabolic disorder that is characterized by high blood glucose in the context of insulin resistance and 
relative insulin deficiency. Diabetes is often initially managed by increasing exercise and dietary modification. As 
the condition progresses, medications may be needed. 

Unlike type 1 diabetes, there is very little tendency toward ketoacidosis, though it is not unheard of. One effect that can occur is nonketotic hyperglycemia. Long-term complications from high blood sugar can include increased risk of heart attacks, strokes, amputation, and kidney failure.


Signs and symptoms 

The classical symptoms of diabetes are polyuria (frequent urination), polydipsia (increased thirst), polyphagia 

(increased hunger), fatigue and weight loss. 


Causes

Type 2 diabetes is due primarily to lifestyle factors and genetics. It was also found that oligomers of islet amyloid polypeptide (IAPP), a protein that forms amyloid deposits in the pancreas during type 2 diabetes, triggered the NLRP3 inflammasome and generated mature IL-1β. One therapy for type 2 diabetes, glyburide, suppressed IAPP-mediated IL-1β production in vitro.


Lifestyle

A number of lifestyle factors are known to be important to the development of type 2 diabetes. In one study, those who had high levels of physical activity, a healthy diet, did not smoke, and consumed alcohol in moderation had an 82% lower rate of diabetes. When a normal weight was included, the rate was 89% lower. In this study a healthy diet was defined as one high in fiber, with a high polyunsaturated-to-saturated-fat ratio, and a lower mean glycemic index. Obesity has been found to contribute to approximately 55% of cases of type 2 diabetes, and decreasing consumption of saturated fats and trans fatty acids while replacing them with unsaturated fats may decrease the risk. The increased rate of childhood obesity between the 1960s and 2000s is believed to have led to the increase in type 2 diabetes in children and adolescents.

Environmental toxins may contribute to recent increases in the rate of type 2 diabetes. A positive correlation has 
been found between the concentration in the urine of bisphenol A, a constituent of some plastics, and the incidence 
of type 2 diabetes. 

Medical conditions 

There are many factors which can potentially give rise to or exacerbate type 2 diabetes. These include obesity, hypertension, elevated cholesterol (combined hyperlipidemia), and the condition often termed metabolic syndrome (also known as syndrome X, Reaven's syndrome, or CHAOS). Other causes include acromegaly, Cushing's syndrome, thyrotoxicosis, pheochromocytoma, chronic pancreatitis, cancer, and drugs. Additional factors found to increase the risk of type 2 diabetes include aging, high-fat diets, and a less active lifestyle.

Subclinical Cushing's syndrome (cortisol excess) may be associated with type 2 diabetes. The percentage of subclinical Cushing's syndrome in the diabetic population is about 9%. Diabetic patients with a pituitary microadenoma can improve insulin sensitivity by removal of these microadenomas.

Hypogonadism is often associated with cortisol excess, and testosterone deficiency is also associated with type 2 diabetes, even though the exact mechanism by which testosterone improves insulin sensitivity is still not known.


There is also a strong inheritable genetic connection in type 2 diabetes: having relatives (especially first-degree) with type 2 diabetes increases the risk of developing type 2 diabetes very substantially. In addition, there is also a mutation to the islet amyloid polypeptide gene that results in an earlier-onset, more severe form of diabetes.

About 55 percent of type 2 diabetes patients are obese at diagnosis; chronic obesity leads to increased insulin resistance that can develop into type 2 diabetes, most likely because adipose tissue (especially that in the abdomen around internal organs) is a (recently identified) source of several chemical signals to other tissues (hormones and cytokines).


Other research shows that type 2 diabetes causes obesity as an effect of the changes in metabolism and other 

deranged cell behavior attendant on insulin resistance. 

However, environmental factors (almost certainly diet and weight) play a large part in the development of type 2 diabetes in addition to any genetic component. This can be seen from the adoption of the type 2 diabetes epidemiological pattern in those who have moved to a different environment as compared to the same genetic pool who have not: for instance, immigrants to Western developed countries as compared to lower-incidence countries of origin.

There is a stronger inheritance pattern for type 2 diabetes. Those with first-degree relatives with type 2 diabetes have a much higher risk of developing type 2 diabetes, increasing with the number of those relatives. Concordance among monozygotic twins is close to 100%, and about 25% of those with the disease have a family history of diabetes. Genes significantly associated with developing type 2 diabetes include TCF7L2, PPARG, FTO, KCNJ11, NOTCH2, WFS1, CDKAL1, IGF2BP2, SLC30A8, JAZF1, and HHEX. KCNJ11 (potassium inwardly rectifying channel, subfamily J, member 11) encodes the islet ATP-sensitive potassium channel Kir6.2, and TCF7L2 (transcription factor 7-like 2) regulates proglucagon gene expression and thus the production of glucagon-like peptide-1. Moreover, obesity (which is an independent risk factor for type 2 diabetes) is strongly inherited.

Monogenic forms, e.g., MODY, constitute 1-5% of all cases.

Various hereditary conditions may feature diabetes, for example myotonic dystrophy and Friedreich's ataxia. 

Wolfram's syndrome is an autosomal recessive neurodegenerative disorder that first becomes evident in childhood. It 

consists of diabetes insipidus, diabetes mellitus, optic atrophy, and deafness, hence the acronym DIDMOAD. 

Gene expression promoted by a diet of fat and glucose, as well as high levels of inflammation-related cytokines found in the obese, results in cells that "produce fewer and smaller mitochondria than is normal," and are thus prone to insulin resistance.


Some drugs, used for any of several conditions, can interfere with the insulin regulation system, possibly producing 
drug induced hyperglycemia. Some examples follow, giving the biochemical mechanism in each case: 

• Atypical antipsychotics - alter receptor binding characteristics, leading to increased insulin resistance.

• Beta-blockers - inhibit insulin secretion.

• Calcium channel blockers - inhibit insulin secretion by interfering with cytosolic calcium release.

• Corticosteroids - cause peripheral insulin resistance and gluconeogenesis.

• Fluoroquinolones - inhibit insulin secretion by blocking ATP-sensitive potassium channels.

• Niacin - causes increased insulin resistance due to increased free fatty acid mobilization.

• Phenothiazines - inhibit insulin secretion.

• Protease inhibitors - inhibit the conversion of proinsulin to insulin.

• Somatropin - may decrease sensitivity to insulin, especially in those susceptible.

• Thiazide diuretics - inhibit insulin secretion due to hypokalemia; they also cause increased insulin resistance due
to increased free fatty acid mobilization.

Diabetes mellitus type 2 248 


Insulin resistance means that body cells do not respond appropriately when insulin is present. Unlike in type 1 diabetes
mellitus, the defect is generally "post-receptor", meaning it is a problem with the cells that respond to insulin
rather than a problem with the production of insulin.

This is a more complex problem than type 1, but is sometimes easier to treat, especially in the early years when 
insulin is often still being produced internally. Severe complications can result from improperly managed type 2 
diabetes, including renal failure, erectile dysfunction, blindness, slow healing wounds (including surgical incisions), 
and arterial disease, including coronary artery disease. The onset of type 2 diabetes has been most common in middle 
age and later life, although it is being more frequently seen in adolescents and young adults due to an increase in 
child obesity and inactivity. A type of diabetes called MODY is increasingly seen in adolescents, but this is 
classified as a diabetes due to a specific cause and not as type 2 diabetes. 

Diabetes mellitus with a known etiology, such as secondary to other diseases, known gene defects, trauma or 
surgery, or the effects of drugs, is more appropriately called secondary diabetes mellitus or diabetes due to a specific 
cause. Examples include diabetes mellitus such as MODY or those caused by hemochromatosis, pancreatic 
insufficiencies, or certain types of medications (e.g., long-term steroid use). 


2006 WHO diabetes diagnosis criteria (venous plasma glucose, mmol/l (mg/dl))

Condition           | 2-hour glucose  | Fasting glucose
Normal              | <7.8 (<140)     | <6.1 (<110)
Diabetes mellitus   | ≥11.1 (≥200)    | ≥7.0 (≥126)


The World Health Organization definition of diabetes is for a single raised glucose reading with symptoms, 
otherwise raised values on two occasions, of either: 

• fasting plasma glucose ≥ 7.0 mmol/l (126 mg/dl), or

• with a glucose tolerance test, two hours after the oral dose, a plasma glucose ≥ 11.1 mmol/l (200 mg/dl)
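As an illustration only (not a diagnostic tool), the cut-offs above can be expressed as a simple classifier. The function name and the generic "intermediate hyperglycemia" label for values between the normal and diabetic thresholds are ours, and the WHO additionally requires raised values on two occasions in the absence of symptoms:

```python
def who_2006_classification(fasting_mmol_l, two_hour_mmol_l=None):
    """Classify a venous plasma glucose reading against the 2006 WHO
    cut-offs quoted above: fasting >= 7.0 mmol/l or 2-hour >= 11.1 mmol/l
    indicates diabetes; fasting < 6.1 and 2-hour < 7.8 are normal."""
    if fasting_mmol_l >= 7.0 or (two_hour_mmol_l is not None
                                 and two_hour_mmol_l >= 11.1):
        return "diabetes mellitus"
    if fasting_mmol_l < 6.1 and (two_hour_mmol_l is None
                                 or two_hour_mmol_l < 7.8):
        return "normal"
    # Values between the thresholds (the WHO subdivides these into
    # impaired fasting glycaemia and impaired glucose tolerance).
    return "intermediate hyperglycemia"
```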

Accuracy of tests for early detection 

If a 2-hour postload glucose level of at least 11.1 mmol/L (≥ 200 mg/dL) is used as the reference standard, the
fasting plasma glucose ≥ 7.0 mmol/L (126 mg/dL) diagnoses current diabetes with:

• sensitivity about 50% 

• specificity greater than 95% 


A random capillary blood glucose ≥ 6.7 mmol/L (120 mg/dL) diagnoses current diabetes with:

• sensitivity = 75% 

• specificity = 88% 
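The sensitivity and specificity figures quoted here follow the standard 2×2 definitions. A minimal sketch; the counts below are invented for illustration and are not taken from the cited studies:

```python
def sensitivity(true_pos, false_neg):
    # Fraction of people with the disease whom the test flags.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of disease-free people whom the test correctly clears.
    return true_neg / (true_neg + false_pos)

# Hypothetical screen: of 100 diabetics (by the 2-hour reference test),
# fasting glucose flags 50; of 1000 non-diabetics, it flags 40.
sens = sensitivity(true_pos=50, false_neg=50)   # 0.50, like the ~50% above
spec = specificity(true_neg=960, false_pos=40)  # 0.96, i.e. greater than 95%
```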

Glycosylated hemoglobin values that are elevated (over 5%), but not in the diabetic range (not over 7.0%), are
predictive of subsequent clinical diabetes in United States female health professionals. In this study, 177 of 1061
patients with a glycosylated hemoglobin value less than 6% became diabetic within 5 years, compared to 282 of 2628
patients with a glycosylated hemoglobin value of 6.0% or more. This equates to a glycosylated hemoglobin value of
6.0% or more having:


• sensitivity = 16.7% 

• specificity = 98.9% 

Benefit of early detection 

Since publication of the USPSTF statement, a randomized controlled trial of acarbose in a high-risk population (men
and women between the ages of 40 and 70 years with a body mass index (BMI), calculated as weight in kilograms
divided by the square of height in meters, between 25 and 40, who had IGT according to the World Health
Organization criteria plus impaired fasting glucose, i.e. a fasting plasma glucose concentration between 100 and
140 mg/dL or 5.5 and 7.8 mmol/L) found a number needed to treat of 44 (over 3.3 years) to prevent a major
cardiovascular event.
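The eligibility and risk arithmetic in that trial description can be sketched as follows. The function names and worked numbers are ours (illustrative), and the ~2.3% figure is simply the reciprocal of the reported number needed to treat, not a number from the trial report:

```python
def bmi(weight_kg, height_m):
    # BMI = weight in kilograms divided by the square of height in meters.
    return weight_kg / height_m ** 2

def eligible(weight_kg, height_m, fasting_mmol_l):
    # Trial entry: BMI between 25 and 40, plus impaired fasting glucose
    # (fasting plasma glucose between 5.5 and 7.8 mmol/L).
    return (25 <= bmi(weight_kg, height_m) <= 40
            and 5.5 <= fasting_mmol_l <= 7.8)

# A number needed to treat of 44 corresponds to an absolute risk
# reduction of 1/44, about 2.3%, over the 3.3-year follow-up.
absolute_risk_reduction = 1 / 44
```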

Other studies have shown that lifestyle changes, orlistat and metformin can delay the onset of diabetes. [41] [42] [43]


Diabetes screening is recommended for many people at various stages of life, and for those with any of several risk 
factors. The screening test varies according to circumstances and local policy, and may be a random blood glucose 
test, a fasting blood glucose test, a blood glucose test two hours after 75 g of glucose, or an even more formal 
glucose tolerance test. Many healthcare providers recommend universal screening for adults at age 40 or 50, and 
often periodically thereafter. Earlier screening is typically recommended for those with risk factors such as obesity, 
family history of diabetes, history of gestational diabetes, or high-risk ethnicity (Hispanic, Native American,
Afro-Caribbean, Pacific Islander, or Maori). [44] [45]

Many medical conditions are associated with diabetes and warrant screening. A partial list includes: subclinical
Cushing's syndrome, [19] testosterone deficiency, [22] high blood pressure, past gestational diabetes, polycystic ovary
syndrome, chronic pancreatitis, fatty liver, cystic fibrosis, several mitochondrial neuropathies and myopathies (such
as MIDD), myotonic dystrophy, Friedreich's ataxia, some of the inherited forms of neonatal hyperinsulinism. The

risk of diabetes is higher with chronic use of several medications, including long term corticosteroids, some 

chemotherapy agents (especially L-asparaginase), as well as some of the antipsychotics and mood stabilizers 

(especially phenothiazines and some atypical antipsychotics). 

People with a confirmed diagnosis of diabetes are tested routinely for complications. This includes yearly urine 
testing for microalbuminuria and examination of the retina of the eye for retinopathy. 


Onset of type 2 diabetes can often be delayed through proper nutrition and regular exercise. 

Interest has arisen in preventing diabetes due to research on the benefits of treating patients before overt diabetes. 
Although the U.S. Preventive Services Task Force concluded that "the evidence is insufficient to recommend for or 
against routinely screening asymptomatic adults for type 2 diabetes, impaired glucose tolerance, or impaired fasting 
glucose," this was a grade I recommendation when published in 2003. However, the USPSTF does
recommend screening for diabetes in adults with hypertension or hyperlipidemia.

In 2005, an evidence report by the Agency for Healthcare Research and Quality concluded that "there is evidence 
that combined diet and exercise, as well as drug therapy (metformin, acarbose), may be effective at preventing 
progression to diabetes in IGT subjects". 

Milk has also been associated with the prevention of diabetes. A questionnaire study by Choi et al. of
41,254 men, with a twelve-year follow-up, showed this association: diets high in low-fat dairy might lower the risk
of type 2 diabetes in men. Even though these benefits are considered linked to milk consumption, diet is only one of
the factors affecting the body's overall health. [52]



Type 2 diabetes risk can be reduced in many cases by making changes in diet and increasing physical activity. 

The American Diabetes Association (ADA) recommends maintaining a healthy weight, getting at least 2½ hours
of exercise per week (several brisk sustained walks appear sufficient), having a modest fat intake, and eating
sufficient fiber (e.g., from whole grains).

There is inadequate evidence that eating foods of low glycemic index is clinically helpful despite recommendations 
and suggested diets emphasizing this approach. 

Diets that are very low in saturated fats reduce the risk of becoming insulin resistant and diabetic. Study
group participants whose "physical activity level and dietary, smoking, and alcohol habits were all in the low-risk
group had an 82% lower incidence of diabetes." In another study of dietary practice and incidence of diabetes,
"foods rich in vegetable oils, including non-hydrogenated margarines, nuts, and seeds, should replace foods rich in
saturated fats from meats and fat-rich dairy products. Consumption of partially hydrogenated fats should be
minimized."


There are numerous studies which suggest connections between some aspects of type 2 diabetes and the ingestion of
certain foods or drugs. Breastfeeding may also be associated with the prevention of type 2 diabetes in mothers.

Some studies have shown delayed progression to diabetes in predisposed patients through prophylactic use of
metformin, rosiglitazone, or valsartan. In patients on hydroxychloroquine for rheumatoid arthritis,
incidence of diabetes was reduced by 77%, though causal mechanisms are unclear. Lifestyle interventions are,
however, more effective than metformin at preventing diabetes regardless of weight loss.


Left untreated, type 2 diabetes is a chronic, progressive condition, but there are well-established treatments which
can delay or prevent entirely the formerly inevitable consequences of the condition. Often, the condition is viewed as
progressive since poor management of blood sugar leads to a myriad of steadily worsening complications. However,
if blood sugar is properly maintained, then the condition is, in a limited sense, cured - that is, patients are at no
heightened risk for neuropathy, blindness, or any other high blood sugar complication, though the underlying issue,
a tendency to hyperglycemia, has not been addressed directly. A study at UCLA in 2005 showed that the Pritikin
Program of diet and exercise brought dramatic improvement to a group of diabetics and pre-diabetics in only three
weeks, so that about half no longer met the criteria for the condition.

There are two main goals of treatment: 

1. reduction of mortality and concomitant morbidity (from assorted diabetic complications)

2. preservation of quality of life 

The first goal can be achieved through close glycemic control (i.e., to near 'normal' blood glucose levels); the 
reduction in severity of diabetic side effects has been very well demonstrated in several large clinical trials and is 
established beyond controversy. The second goal is often addressed (in developed countries) by support and care 
from teams of diabetic health workers (usually physician, PA, nurse, dietitian or a certified diabetic educator). 
Endocrinologists, family practitioners, and general internists are the physician specialties most likely to treat people 
with diabetes. Knowledgeable patient participation is vital to clinical success, and so patient education is a crucial 
aspect of this effort. 

Type 2 diabetes is initially treated by adjustments in diet and exercise, and by weight loss, most especially in obese 
patients. The amount of weight loss which improves the clinical picture is sometimes modest (2—5 kg or 4.4-11 lb); 
this is almost certainly due to currently poorly understood aspects of fat tissue activity, for instance chemical 


signaling (especially in visceral fat tissue in and around abdominal organs). In many cases, such initial efforts can 
substantially restore insulin sensitivity. In some cases strict diet can adequately control the glycemic levels. 

Diabetes education is an integral component of medical care. 


Treatment goals for type 2 diabetic patients are related to effective control of blood glucose, blood pressure and 
lipids to minimize the risk of long-term consequences associated with diabetes. They are suggested in clinical 
practice guidelines released by various national and international diabetes agencies. 

The targets are: 

• HbA1c of 6% [66] to 7.0% [67]

• Preprandial blood glucose: 4.0 to 6.0 mmol/L (72 to 108 mg/dl) 

• 2-hour postprandial blood glucose: 5.0 to 8.0 mmol/L (90 to 144 mg/dl) 

In older patients, clinical practice guidelines by the American Geriatrics Society state that "for frail older adults, persons
with life expectancy of less than 5 years, and others in whom the risks of intensive glycemic control appear to
outweigh the benefits, a less stringent target such as an HbA1c of 8% is appropriate".
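The paired units in the targets above use the usual conversion of roughly 18 mg/dl per mmol/L for plasma glucose (e.g. 4.0 mmol/L × 18 = 72 mg/dl). A small sketch of checking a reading against these targets; the function names are ours:

```python
def mmol_to_mg_dl(mmol):
    # Plasma glucose: 1 mmol/L is about 18 mg/dl.
    return mmol * 18

def in_preprandial_target(mmol):
    # Preprandial target: 4.0 to 6.0 mmol/L (72 to 108 mg/dl).
    return 4.0 <= mmol <= 6.0

def in_postprandial_target(mmol):
    # 2-hour postprandial target: 5.0 to 8.0 mmol/L (90 to 144 mg/dl).
    return 5.0 <= mmol <= 8.0
```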

Lifestyle modification 


In September 2007, a joint randomized controlled trial by the University of Calgary and the University of Ottawa 
found that "Either aerobic or resistance training alone improves glycemic control in type 2 diabetes, but the 
improvements are greatest with combined aerobic and resistance training than either alone." The combined 

program reduced the HbA1c by 0.5 percentage point. Other studies have established that the amount of exercise
needed is not large or extreme, but must be consistent and continuing. Examples might include a brisk 45-minute
walk every other day.

Theoretically, exercise has benefits in that it stimulates the release of certain ligands that cause
GLUT4 to be released from internal endosomes to the cell membrane. Insulin, which no longer works
effectively in those afflicted with type 2 diabetes, causes GLUT1 to be placed into the membrane. Exercise also
allows for the uptake of glucose independently of insulin, e.g. via adrenaline.

Dietary management 

Modifying the diet to limit and control glucose (or glucose equivalent, e.g., starch) intake, and in consequence, blood 
glucose levels, is known to assist type 2 patients, especially early in the course of the condition's progression. 
Additionally, weight loss is recommended and is often helpful in persons suffering from type 2 diabetes (see above). 

Monitoring of blood glucose 

Self-monitoring of blood glucose may not improve outcomes in some cases, especially those of "reasonably well 

controlled non-insulin treated patients with type 2 diabetes". Nevertheless, it is very strongly recommended for 

patients in whom it can assist in maintaining proper glycemic control, and is well worth the cost (sometimes

considerable) if it does. It is the only source of current information on the glycemic state of the body, as changes are 

rapid and frequent, depending on food, exercise, and medication (dosage and timing with respect to both diet and 

exercise), and secondarily, on time of day, stress (mental and physical), infection, etc. However, patient adherence to 

self-monitoring routines is often sporadic and prone to fluctuation, with patients often self-monitoring very regularly 

near to check-up times and very little during other times. Ensuring due compliance is an ongoing issue which may 

well affect research results into efficacy and benefits of self-monitoring. 

The National Institute for Health and Clinical Excellence (NICE), UK released updated diabetes recommendations 
on 30 May 2008. They indicate that self-monitoring of blood glucose levels for people with newly diagnosed type 2 



diabetes should be part of a structured self-management education plan. However, a recent study found that a
treatment strategy of intensively lowering blood sugar levels (below 6%) in patients with additional cardiovascular
disease risk factors poses more harm than benefit, and so there appear to be limits to the benefit of intensive blood
glucose control in some patients. [75] [76]

[Image: Metformin 500 mg tablets]


There are several drugs available for type 2 diabetics — most are 
unsuitable or even dangerous for use by type 1 diabetics. They fall 
into several classes and are not equivalent, nor can they be simply 
substituted one for another. All are prescription drugs. 

One of the most widely used drugs now used for type 2 diabetes is 

the biguanide metformin; it works primarily by reducing liver 

release of blood glucose from glycogen stores and secondarily by 

provoking some increase in cellular uptake of glucose in body 

tissues. Metformin also reduces insulin resistance and is preferred 

in obese patients as it promotes weight loss. Both historically and 

currently the most commonly used drugs are in the sulfonylurea
group, of which several members (including glibenclamide and
gliclazide) are widely used; these increase glucose-stimulated insulin secretion by the pancreas and so lower blood
glucose even in the face of insulin resistance. Their chief adverse effect is an increased chance of hypoglycemic
episodes.


Newer drug classes include: 

• Thiazolidinediones (TZDs) (rosiglitazone, pioglitazone, and troglitazone; the last, as Rezulin, was withdrawn
from the US market because of liver toxicity). These increase tissue insulin sensitivity by
affecting gene expression

• a-glucosidase inhibitors (acarbose and miglitol) which interfere with absorption of some glucose containing 
nutrients, reducing (or at least slowing) the amount of glucose absorbed 

• Meglitinides (nateglinide, repaglinide, and their analogs), which stimulate insulin release quickly; they can be
taken with food, unlike the sulfonylureas, which must be taken prior to food (sometimes some hours before,
depending on the drug)

• Peptide analogs which work in a variety of ways: 

• Incretin mimetics, which increase insulin output from the beta cells among other effects. These include the
glucagon-like peptide (GLP) analog exenatide, sometimes referred to as "lizard spit" as it was first identified in
Gila monster saliva

• Dipeptidyl peptidase-4 (DPP-4) inhibitors (e.g., sitagliptin), which increase incretin levels by decreasing their deactivation

• Amylin agonist analog, which slows gastric emptying and suppresses glucagon (pramlintide) 


A systematic review of randomized controlled trials found that metformin and second-generation sulfonylureas are 


the preferred choices for most with type 2 diabetes, especially those early in the course of the condition. Failure 
of response after a time is not unknown with most of these agents: the initial choice of anti-diabetic drug has been 
compared in a randomized controlled trial which found "cumulative incidence of monotherapy failure at 5 years to 


be 15% with rosiglitazone, 21% with metformin, and 34% with glyburide". Of these, rosiglitazone users showed 


more weight gain and edema than did non-users. Rosiglitazone may increase risk of death from cardiovascular 

causes, though the causal connection is unclear. Pioglitazone and rosiglitazone may also increase the risk of fracture.




For patients who also have heart failure, metformin may be the best tolerated drug. 

The variety of available agents can be confusing, and the clinical differences among type 2 diabetes patients 
compounds the problem. At present, choice of drugs for type 2 diabetics is rarely straightforward and in most 
instances has elements of repeated trial and adjustment. 

Injectable peptide analogs 

DPP-4 inhibitors (also known as gliptins) lowered HbA1c by 0.74 percentage points, comparable to other antidiabetic
drugs. GLP-1 analogs resulted in weight loss and had more gastrointestinal side effects, while DPP-4 inhibitors
were generally weight neutral and increased risk for infection and headache, but both classes appear to present an
alternative to other antidiabetic drugs. However, weight gain and/or hypoglycaemia have been observed when DPP-4
inhibitors were used with sulfonylureas; effects on long-term health and morbidity rates are still unknown.


In rare cases, if antidiabetic drugs fail (i.e., the clinical benefit stops), insulin therapy may be necessary — usually in 
addition to oral medication therapy — to maintain normal or near normal glucose levels. 

Typical total daily dosage of insulin is 0.6 U/kg. But, of course, best timing and indeed total amounts depend on
diet (composition, amount, and timing) as well as the degree of insulin resistance. More complicated estimations to
guide initial dosage of insulin are:

• For men: [(fasting plasma glucose [mmol/l] − 5) × 2] × (weight [kg] ÷ (14.3 × height [m]) − height [m])

• For women: [(fasting plasma glucose [mmol/l] − 5) × 2] × (weight [kg] ÷ (13.2 × height [m]) − height [m])
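Read literally, the two estimation formulas above (with the division signs restored as ÷) can be implemented as follows. This is a sketch of the heuristic as transcribed here, with an invented function name; it is not dosing advice:

```python
def initial_daily_insulin_units(fpg_mmol_l, weight_kg, height_m, male=True):
    """Estimate initial total daily insulin dose from the formulas above.
    The sex-specific constant (14.3 for men, 13.2 for women) divides the
    weight term."""
    k = 14.3 if male else 13.2
    return ((fpg_mmol_l - 5) * 2) * (weight_kg / (k * height_m) - height_m)
```

For example, a man with a fasting plasma glucose of 10 mmol/l, weighing 80 kg at 1.80 m, would come out at roughly 13 units per day under this formula.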


The initial insulin regimen is often chosen based on the patient's blood glucose profile. Initially, adding nightly
insulin to patients failing oral medications may be best. Nightly insulin combines better with metformin than with
sulfonylureas. The initial dose of nightly insulin (measured in IU/d) should be equal to the fasting blood glucose
level (measured in mmol/L). If the fasting glucose is reported in mg/dl, multiply by 0.05551 to convert to mmol/L.
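The rule of thumb just described amounts to a single multiplication. A sketch, again for illustration rather than dosing advice:

```python
MG_DL_TO_MMOL_L = 0.05551  # conversion factor quoted in the text

def initial_nightly_insulin_iu(fasting_glucose, unit="mmol/L"):
    # Initial nightly dose in IU/d equals the fasting glucose in mmol/L;
    # mg/dl readings are converted first.
    if unit == "mg/dl":
        fasting_glucose *= MG_DL_TO_MMOL_L
    return round(fasting_glucose)

# e.g. a fasting glucose of 180 mg/dl is ~10 mmol/L, suggesting ~10 IU.
```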

When nightly insulin is insufficient, choices include: 

• Premixed insulin with a fixed ratio of short and intermediate acting insulin; this tends to be more effective than
long acting insulin, but is associated with increased hypoglycemia. [91] [92] [93] Initial total daily dosage of biphasic
insulin can be 10 units if the fasting plasma glucose values are less than 180 mg/dl, or 12 units when the fasting
plasma glucose is above 180 mg/dl. [92] A guide to titrating fixed ratio insulin is available.

• Long acting insulins such as insulin glargine and insulin detemir. A meta-analysis of randomized controlled trials 

by the Cochrane Collaboration found "only a minor clinical benefit of treatment with long-acting insulin 

analogues for patients with diabetes mellitus type 2". More recently, a randomized controlled trial found that 

although long acting insulins were less effective, they were associated with reduced hypoglycemic episodes. 

• Insulin pump therapy in type 2 diabetes is gradually becoming popular. In one published study, in addition
to reduction of blood sugars, there was evidence of profound benefits in resistant neuropathic pain and also
improvements in sexual performance.

Gastric bypass surgery 

Gastric bypass procedures are currently considered elective, with no universally accepted algorithm to
decide who should have the surgery. In the diabetic patient, certain types result in 99-100% prevention of insulin
resistance and 80-90% clinical resolution or remission of type 2 diabetes. In 1991, the NIH (National Institutes of
Health) Consensus Development Conference on Gastrointestinal Surgery for Obesity proposed that the body mass 
index (BMI) threshold to consider surgery should drop from 40 to 35 in the appropriate patient. More recently, the 
American Society for Bariatric Surgery (ASBS) and the ASBS Foundation suggested that the BMI threshold be 


lowered to 30 in the presence of severe co-morbidities. Debate has flourished about the role of gastric bypass 

surgery in type 2 diabetics since the publication of The Swedish Obese Subjects Study. The largest prospective series 

showed a large decrease in the occurrence of type 2 diabetes in the post-gastric bypass patient at both 2 years (odds 

ratio was 0.14) and at 10 years (odds ratio was 0.25). 

A study of 20-years of Greenville (US) gastric bypass patients found that 80% of those with type 2 diabetes before 
surgery no longer required insulin or oral agents to maintain normal glucose levels. Weight loss occurred rapidly in 
many people in the study who had had the surgery. The 20% who did not respond to bypass surgery were, typically, 
those who were older and had had diabetes for over 20 years. 

In January 2008, the Journal of the American Medical Association (JAMA) published the first randomized 

controlled trial comparing the efficacy of laparoscopic adjustable gastric banding against conventional medical 

therapy in the obese patient with type 2 diabetes. In that trial, laparoscopic adjustable gastric banding resulted in
remission of type 2 diabetes among affected patients diagnosed within the previous two years. The relative risk
reduction was 69.0%. For patients at similar risk to those in this study (87.0% of whom had type 2 diabetes), this
corresponds to an absolute risk reduction of 60%, so 1.7 patients must be treated for one to benefit (number
needed to treat = 1.7).
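The banding-trial numbers are internally consistent: with 87% of participants having type 2 diabetes as the baseline risk and a relative risk reduction of 69%, the absolute risk reduction is about 0.87 × 0.69 ≈ 0.60, and the number needed to treat is its reciprocal. A quick check (our derivation from the quoted figures):

```python
baseline_risk = 0.87            # 87.0% of the study population had type 2 diabetes
relative_risk_reduction = 0.69  # the 69.0% reported in the trial

# Absolute risk reduction and number needed to treat follow directly.
absolute_risk_reduction = baseline_risk * relative_risk_reduction  # ~0.60
number_needed_to_treat = 1 / absolute_risk_reduction               # ~1.7
```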

These results have not yet produced a clinical standard for surgical treatment of type 2 diabetes, as the mechanism, if 
any, is currently obscure. Surgical cure of type 2 diabetes must be, as a result, considered currently experimental. 


There are an estimated 23.6 million people in the United States (7.8% of the population) with diabetes, 17.9
million of them diagnosed, 90% of whom have type 2. With prevalence rates doubling between 1990 and 2005, the
CDC has characterized the increase as an epidemic. Traditionally considered a disease of adults, type 2 diabetes
is increasingly diagnosed in children, in parallel to rising obesity rates, due to alterations in dietary patterns as
well as in lifestyles during childhood.

About 90–95% of all North American cases of diabetes are type 2, and about 20% of the population over the age
of 65 has type 2 diabetes. The incidence of type 2 diabetes in other parts of the world varies substantially, almost 
certainly because of environmental and lifestyle factors, though these are not known in detail. Diabetes affects over 
150 million people worldwide and this number is expected to double by 2025. 


[1] "Diabetes Blue Circle Symbol" ( International Diabetes Federation. 17 March 2006. .



[5] http://emedicine.medscape.com/article/117853-overview


[7] Robbins and Cotran, Pathologic Basis of Disease, 7th Ed. pp. 1194–1195.

[8] Brian J. Welch, MD and Ivana Zib, MD: Case Study: Diabetic Ketoacidosis in Type 2 Diabetes: "Look Under the Sheets" (http://clinical., Clinical Diabetes, October 2004, vol. 22 no. 4, 198-200 
[9] Cooke DW, Plotnick L (November 2008). "Type 1 diabetes mellitus in pediatrics". Pediatr Rev 29 (11): 374-84; quiz 385.

doi:10.1542/pir.29-11-374. PMID 18977856.
[10] Riserus U, Willett WC, Hu FB (January 2009). "Dietary fats and prevention of type 2 diabetes" ( 

articlerender.fcgi?tool=pmcentrez&artid=2654180). Progress in Lipid Research 48 (1): 44-51. doi:10.1016/j.plipres.2008. 10.002. 

PMID 19032965. PMC 2654180. 

[11] Masters SL, Dunne A, Subramanian SL, Hull RL, Tannahill GM, Sharp FA et al. (2010). "Activation of the NLRP3 inflammasome by islet
amyloid polypeptide provides a mechanism for enhanced IL-1β in type 2 diabetes." (
fcgi?dbfrom=pubmed& Nat Immunol 11 (10): 897-904. 
doi:10.1038/ni.l935. PMID 20835230. . 


[12] Mozaffarian D, Kamineni A, Carnethon M, Djousse L, Mukamal KJ, Siscovick D (April 2009). "Lifestyle risk factors and new-onset 

diabetes mellitus in older adults: the cardiovascular health study" ( 

artid=2828342). Archives of Internal Medicine 169 (8): 798-807. doi:10.1001/archinternmed.2009.21. PMID 19398692. PMC 2828342. 
[13] Centers for Disease Control and Prevention (CDC) (November 2004). "Prevalence of overweight and obesity among adults with diagnosed 

diabetes — United States, 1988-1994 and 1999-2002" ( MMWR. 

Morbidity and Mortality Weekly Report 53 (45): 1066-8. PMID 15549021.. 
[14] Arlan Rosenbloom, Janet H Silverstein (2003). Type 2 Diabetes in Children and Adolescents: A Clinician's Guide to Diagnosis, 

Epidemiology, Pathogenesis, Prevention, and Treatment. American Diabetes Association, U.S.. pp. 1. ISBN 978-1580401555. 
[15] Lang IA, Galloway TS, Scarlett A, et al. (September 2008). "Association of urinary bisphenol A concentration with medical disorders and 

laboratory abnormalities in adults". JAMA 300 (11): 1303-10. doi:10.1001/jama.300.11.1303. PMID 18799442. 
[16] Jack L, Boseman L, Vinicor F (April 2004). "Aging Americans and diabetes. A public health and clinical response". Geriatrics 59 (4): 14—7. 

PMID 15086069. 
[17] Lovejoy JC (October 2002). "The influence of dietary fat on insulin resistance". Curr. Diab. Rep. 2 (5): 435–40.

doi:10.1007/s11892-002-0098-y. PMID 12643169.
[18] Hu FB (February 2003). "Sedentary lifestyle and risk of obesity and type 2 diabetes". Lipids 38 (2): 103-8. doi:10.1007/s11745-003-1038-4.

PMID 12733740. 
[19] Iwasaki Y, Takayasu S, Nishiyama M, et al. (March 2008). "Is the metabolic syndrome an intracellular Cushing state? Effects of multiple 

humoral factors on the transcriptional activity of the hepatic glucocorticoid-activating enzyme (11beta-hydroxysteroid dehydrogenase type 1)

gene". Molecular and Cellular Endocrinology 285 (1-2): 10-8. doi: 10.1016/j.mce.2008.01.012. PMID 18313835. 
[20] Chiodini I, Torlontano M, Scillitani A, et al. (December 2005). "Association of subclinical hypercortisolism with type 2 diabetes mellitus: a 

case-control study in hospitalized patients". European Journal of Endocrinology 153 (6): 837—44. doi:10.1530/eje. 1.02045. PMID 16322389. 
[21] Taniguchi T, Hamasaki A, Okamoto M (May 2008). "Subclinical hypercortisolism in hospitalized patients with type 2 diabetes mellitus" 

( Endocrine Journal 55 (2): 429-32. 

doi:10.1507/endocrj.K07E-045. PMID 18362453. . 
[22] Saad F, Gooren L (March 2009). "The role of testosterone in the metabolic syndrome: a review". The Journal of Steroid Biochemistry and 


External links

• Diabetes mellitus type 2 at the Open Directory Project
• Type 2 Diabetes - General Information
• IDF Diabetes Atlas
• International Diabetes Federation
• World Diabetes Day (International Diabetes Federation)
• Diabetes UK - Largest organisation in the UK working for people with diabetes
• American Diabetes Association
• National Diabetes Information Clearinghouse
• Centers for Disease Control (Endocrine pathology)

Further reading

• Diabetes Symptoms Revisited: Are They Too Vague and Too Late?
• ABC Radio National transcript on hypothesised aetiology involving gut hormone

Complex Systems Biology, Genetic
Screening and Biostatistics

Complex Systems Biology 

Systems biology is a term used to describe a number of trends in bioscience research, and a movement which draws on those trends. Proponents describe systems biology as a biology-based inter-disciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reduction). Particularly from the year 2000 onwards, the term has been used widely in the biosciences, and in a variety of contexts. An often stated ambition of systems biology is the modeling and discovery of emergent properties: properties of a system whose theoretical description is only possible using techniques which fall under the remit of systems biology.



[Figure: Example of systems biology research.]


Systems biology can be considered from a number of different aspects: 

• As a field of study, particularly, the study of the interactions between the components of biological systems, and 
how these interactions give rise to the function and behavior of that system (for example, the enzymes and 
metabolites in a metabolic pathway). 

• As a paradigm, usually defined in antithesis to the so-called reductionist paradigm (biological organisation), 
although fully consistent with the scientific method. The distinction between the two paradigms is referred to in 
these quotations: 

"The reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models" [3]

"Systems biology... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ...It means changing our philosophy, in the full sense of the term" Denis Noble [4]

• As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then using the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models.

• As the application of dynamical systems theory to molecular biology. 

• As a socioscientific phenomenon defined by the strategy of pursuing integration of complex data about the 
interactions in biological systems from diverse experimental sources using interdisciplinary tools and personnel. 

This variety of viewpoints illustrates that systems biology refers to a cluster of peripherally overlapping concepts rather than a single well-delineated field. However, the term has widespread currency and popularity as of 2007, with chairs and institutes of systems biology proliferating worldwide.


Systems biology finds its roots in: 

• the quantitative modeling of enzyme kinetics, a discipline that flourished between 1900 and 1970, 

• the mathematical modeling of population growth, 

• the simulations developed to study neurophysiology, and 

• control theory and cybernetics. 
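The enzyme-kinetics root mentioned above can be made concrete with a minimal sketch (plain Python; the parameter values are hypothetical, chosen only for illustration) of the Michaelis-Menten rate law, one of the earliest quantitative models in biochemistry:

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical parameters: Vmax = 10.0, Km = 2.0 (arbitrary units).
rates = [michaelis_menten(s, 10.0, 2.0) for s in (0.5, 2.0, 200.0)]
# At s = Km the rate is exactly Vmax / 2; at large s it saturates near Vmax.
```

The hyperbolic saturation this function describes is the prototype of the nonlinear rate terms that make cellular models interesting to analyse.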

One of the theorists who can be seen as a precursor of systems biology is Ludwig von Bertalanffy with his general systems theory. One of the first numerical simulations in biology was published in 1952 by the British neurophysiologists and Nobel prize winners Alan Lloyd Hodgkin and Andrew Fielding Huxley, who constructed a mathematical model that explained the action potential propagating along the axon of a neuronal cell. Their model described a cellular function emerging from the interaction between two different molecular components, a potassium and a sodium channel, and can therefore be seen as the beginning of computational systems biology. In 1960, Denis Noble developed the first computer model of the heart pacemaker.
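The flavour of such a model can be sketched with the FitzHugh-Nagumo equations, a well-known two-variable reduction of the Hodgkin-Huxley system. The sketch below (plain Python, textbook parameter values, explicit Euler integration) is illustrative only, not the original four-variable Hodgkin-Huxley model:

```python
def fitzhugh_nagumo(i_ext, t_end=200.0, dt=0.01):
    """Integrate the FitzHugh-Nagumo equations (a two-variable
    reduction of the Hodgkin-Huxley model) with explicit Euler.

    Returns the trace of the fast "membrane potential" variable v."""
    v, w = -1.0, -0.5              # initial fast / slow (recovery) variables
    a, b, tau = 0.7, 0.8, 12.5     # standard textbook parameters
    trace = []
    for _ in range(int(t_end / dt)):
        dv = v - v ** 3 / 3 - w + i_ext   # cubic fast dynamics + stimulus
        dw = (v + a - b * w) / tau        # slow linear recovery
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

# A constant stimulus in the oscillatory regime produces repetitive "spikes":
# the interaction of the fast and slow variables yields a limit cycle,
# an emergent behaviour neither equation shows on its own.
trace = fitzhugh_nagumo(0.5)
```

As in the Hodgkin-Huxley work, the qualitative behaviour (excitability, repetitive firing) emerges only from the coupled system, not from either variable in isolation.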

The formal study of systems biology, as a distinct discipline, was launched by systems theorist Mihajlo Mesarovic in 
1966 with an international symposium at the Case Institute of Technology in Cleveland, Ohio entitled "Systems 
Theory and Biology." 

The 1960s and 1970s saw the development of several approaches to the study of complex molecular systems, such as metabolic control analysis and biochemical systems theory. The successes of molecular biology throughout the 1980s, coupled with skepticism toward theoretical biology, which then promised more than it achieved, caused the quantitative modelling of biological processes to become a somewhat minor field.

However, the birth of functional genomics in the 1990s meant that large quantities of high quality data became available, while computing power exploded, making more realistic models possible. In 1997, the group of Masaru Tomita published the first quantitative model of the metabolism of a whole (hypothetical) cell.

Around the year 2000, after Institutes of Systems Biology were established in Seattle and Tokyo, systems biology emerged as a movement in its own right, spurred on by the completion of various genome projects, the large increase in data from the omics (e.g. genomics and proteomics) and the accompanying advances in high-throughput experiments and bioinformatics. Since then, various research institutes dedicated to systems biology have been developed. As of summer 2006, owing to a shortage of people in systems biology, several doctoral training centres in systems biology had been established in many parts of the world.



Overview of signal transduction pathways 

Disciplines associated with systems biology 

According to the interpretation of Systems 
Biology as the ability to obtain, integrate and 
analyze complex data from multiple 
experimental sources using interdisciplinary 
tools, some typical technology platforms are: 

• Phenomics: Organismal variation in phenotype as it changes during its life span.

• Genomics: Organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell-specific variation (i.e. telomere length variation, etc.).

• Epigenomics / Epigenetics: Organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (i.e. DNA methylation, histone acetylation, etc.).

• Transcriptomics: Organismal, tissue or whole cell gene expression measurements by DNA microarrays or serial 
analysis of gene expression 

• Interferomics: Organismal, tissue, or cell level transcript correcting factors (i.e. RNA interference) 

• Translatomics / Proteomics: Organismal, tissue, or cell level measurements of proteins and peptides via 
two-dimensional gel electrophoresis, mass spectrometry or multi-dimensional protein identification techniques 
(advanced HPLC systems coupled with mass spectrometry). Sub disciplines include phosphoproteomics, 
glycoproteomics and other methods to detect chemically modified proteins. 

• Metabolomics: Organismal, tissue, or cell level measurements of all small-molecules known as metabolites. 

• Glycomics: Organismal, tissue, or cell level measurements of carbohydrates. 

• Lipidomics: Organismal, tissue, or cell level measurements of lipids. 

In addition to the identification and quantification of the molecules given above, further techniques analyze the dynamics and interactions within a cell. These include:

• Interactomics: Organismal, tissue, or cell level study of interactions between molecules. Currently the authoritative molecular discipline in this field of study is protein-protein interactions (PPI), although the working definition does not preclude inclusion of other molecular disciplines such as those defined here.

• Fluxomics: Organismal, tissue, or cell level measurements of molecular dynamic changes over time. 

• Biomics: systems analysis of the biome. 

The investigations are frequently combined with large scale perturbation methods, including gene-based (RNAi, mis-expression of wild type and mutant genes) and chemical approaches using small molecule libraries. Robots and automated sensors enable such large-scale experimentation and data acquisition. These technologies are still emerging, and many face the problem that the larger the quantity of data produced, the lower its quality. A wide variety of quantitative scientists (computational biologists, statisticians, mathematicians, computer scientists, engineers, and physicists) are working to improve the quality of these approaches and to create, refine, and retest the models so that they accurately reflect observations.

The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used. Other aspects of computer science and informatics are also used in systems biology. These include new forms of computational model, such as the use of process calculi to model biological processes; the integration of information from the literature, using techniques of information extraction and text mining; the development of online databases and repositories for sharing data and models; approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suites; and the development of syntactically and semantically sound ways of representing biological models.
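As a minimal illustration of such chemical-kinetics modelling, the sketch below (plain Python; the pathway and rate constants are hypothetical) integrates mass-action rate equations for a toy two-step pathway A → B → C with the explicit Euler method:

```python
def simulate_pathway(k1, k2, a0=1.0, t_end=10.0, dt=0.001):
    """Mass-action kinetics for the toy pathway A -> B -> C,
    integrated with the explicit Euler method."""
    a, b, c = a0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v1 = k1 * a            # rate of A -> B
        v2 = k2 * b            # rate of B -> C
        a += dt * (-v1)
        b += dt * (v1 - v2)
        c += dt * v2
    return a, b, c

# With k1 = 1.0 and k2 = 0.5 almost all mass ends up in C by t = 10,
# and total mass A + B + C is conserved by construction.
a, b, c = simulate_pathway(k1=1.0, k2=0.5)
```

Real cellular networks couple many such reactions, often nonlinearly, which is why the numerical and computational techniques mentioned above become indispensable.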

See also

Related fields
• Biological network inference
• Biological systems engineering
• Biomedical cybernetics
• Computational biology
• Computational systems biology
• Complex systems
• Complex systems biology
• Extrapolation based molecular systems
• Theoretical Biophysics
• Network Biology
• Relational Biology
• Translational Research
• Synthetic biology
• Systems biology modeling
• Systems ecology
• Systems genetics
• Systems immunology

Related terms
• Biological organisation
• Artificial life
• Gene regulatory network
• Metabolic network modelling
• Living systems theory
• Network Theory of Aging
• Systems Biology Markup Language
• Systems Biology Graphical Notation
• Viable System Model

Systems biologists
• Category:Systems biologists
• List of systems biology conferences
• List of omics topics in biology
• List of publications in systems biology
• List of systems biology research groups
• List of systems biology visualization

[1] Snoep J.L. and Westerhoff H.V.; Alberghina L. and Westerhoff H.V. (Eds.) (2005). "From isolation to integration, a systems biology approach for building the Silicon Cell". Systems Biology: Definitions and Perspectives. Springer-Verlag. p. 7.

[2] "Systems Biology — the 21st Century Science".
[3] Sauer, U. et al. (27 April 2007). "Getting Closer to the Whole Picture". Science 316: 550. doi:10.1126/science.1142502. PMID 17463274.
[4] Denis Noble (2006). The Music of Life: Biology beyond the genome. Oxford University Press. ISBN 978-0199295739. p21 
[5] "Systems Biology: Modelling, Simulation and Experimental Validation".
[6] Kholodenko B.N., Bruggeman F.J., Sauro H.M.; Alberghina L. and Westerhoff H.V. (Eds.) (2005). "Mechanistic and modular approaches to modeling and inference of cellular regulatory networks". Systems Biology: Definitions and Perspectives. Springer-Verlag. p. 143.
[7] von Bertalanffy, Ludwig (1968). General System Theory: Foundations, Development, Applications. George Braziller. ISBN 0807604534.
[8] Hodgkin AL, Huxley AF (1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". J Physiol 117 (4): 500-544. PMID 12991237. PMC 1392413.
[9] Le Novere, N (2007). "The long journey to a Systems Biology of neuronal function". BMC Systems Biology 1: 28. doi:10.1186/1752-0509-1-28. PMID 17567903. PMC 1904462.
[10] Noble D (1960). "Cardiac action and pacemaker potentials based on the Hodgkin-Huxley equations". Nature 188: 495-497. doi:10.1038/188495b0. PMID 13729365.

[11] Mesarovic, M. D. (1968). Systems Theory and Biology. Springer-Verlag.

[12] "A Means Toward a New Holism" (http://www.jstor.org/view/00368075/ap004022/00a00220/0). Science 161 (3836): 34-35. doi:10.1126/science.161.3836.34.
[13] "Working the Systems".


[14] Gardner, TS; di Bernardo D, Lorenz D and Collins JJ (4 July 2003). "Inferring genetic networks and identifying compound mode of action via expression profiling". Science 301: 102-105. doi:10.1126/science.1081900. PMID 12843395.
[15] di Bernardo, D; Thompson MJ, Gardner TS, Chobot SE, Eastwood EL, Wojtovich AP, Elliot SJ, Schaus SE and Collins JJ (March 2005). "Chemogenomic profiling on a genome-wide scale using reverse-engineered gene networks". Nature Biotechnology 23: 377-383. doi:10.1038/nbt1075. PMID 15765094.

Further reading 

Barnes, D.J.; Chu, D. (2010). Introduction to Modelling for Biosciences. Springer Verlag.

Zeng BJ. Structurity - Pan-evolution theory of biosystems (EVOMEMLI.1.html) (On the theory of system biological engineering and systems medicine etc.), Hunan Changsha Xinghai, May, 1994.

Hiroaki Kitano, ed (2001). Foundations of Systems Biology. MIT Press. ISBN 0-262-11266-3.

CP Fall, E Marland, J Wagner and JJ Tyson, ed (2002). Computational Cell Biology. Springer Verlag. ISBN 0-387-95369-8.

G Bock and JA Goode, ed (2002). "In Silico" Simulation of Biological Processes. Novartis Foundation Symposium. 247. John Wiley. ISBN 0-470-84480-9.

E Klipp, R Herwig, A Kowald, C Wierling, and H Lehrach (2005). Systems Biology in Practice. Wiley-VCH. ISBN 3-527-31078-9.

L. Alberghina and H. Westerhoff, ed (2005). Systems Biology: Definitions and Perspectives. Topics in Current Genetics. 13. Springer Verlag. ISBN 978-3540229681.

A Kriete, R Eils (2005). Computational Systems Biology. Elsevier. ISBN 0-12-088786-X.

K. Sneppen and G Zocchi (2005). Physics in Molecular Biology. Cambridge University Press. ISBN 0-521-84419-3.

D. Noble (2006). The Music of Life: Biology beyond the genome. Oxford University Press. ISBN 0199295735.

Z. Szallasi, J. Stelling, and V. Periwal, ed (2006). System Modeling in Cellular Biology: From Concepts to Nuts and Bolts. MIT Press. ISBN 0-262-19548-8.

B Palsson (2006). Systems Biology - Properties of Reconstructed Networks. Cambridge University Press. ISBN 978-0-521-85903-5.

K Kaneko (2006). Life: An Introduction to Complex Systems Biology. Springer. ISBN 3540326669.

U Alon (2006). An Introduction to Systems Biology: Design Principles of Biological Circuits. CRC Press. ISBN 1-58488-642-0. Emphasis on Network Biology. (For a comparative review of Alon, Kaneko and Palsson see Werner, E. (March 29, 2007). "All systems go". Nature 446: 493-4. doi:10.1038/446493a.)

Andriani Daskalaki, ed (October 2008). Handbook of Research on Systems Biology Applications in Medicine. Medical Information Science Reference. ISBN 978-1-60566-076-9.

Huma M. Lodhi, Stephen H. Muggleton (February 2010). Elements of Computational Systems Biology. John Wiley. ISBN 978-0-470-18093-8.



• BMC Systems Biology - open access journal on systems biology

• Molecular Systems Biology - open access journal on systems biology

• IET Systems Biology - not open access journal on systems biology

• WIRES Systems Biology and Medicine - open access review journal on systems biology and medicine

• EURASIP Journal on Bioinformatics and Systems Biology

• Systems and Synthetic Biology (http://www.springer.com/biomed/journal/11693)

• International Journal of Computational Intelligence in Bioinformatics and Systems Biology (http://www.inderscience.com/browse/index.php?journalCODE=ijcibsb)


• Zeng BJ., On the concept of system biological engineering, Communication on Transgenic Animals, CAS, June, 

• Zeng BJ., Transgenic expression system - goldegg plan (termed system genetics as the third wave of genetics), Communication on Transgenic Animals, CAS, Nov. 1994.

• Zeng BJ., From positive to synthetic medical science, Communication on Transgenic Animals, CAS, Nov. 1995. 

• Binnewies, Tim Terence, Miller, WG, Wang, G. (2008). "The complete genome sequence and analysis of the human pathogen Campylobacter lari". Foodborne Pathog Disease 5 (4): 371-386. doi:10.1089/fpd.2008.0101. PMID 18713059.

• Tomita M, Hashimoto K, Takahashi K, Shimizu T, Matsuzaki Y, Miyoshi F, Saito K, Tanida S, Yugi K, Venter JC, Hutchison CA (1997). "E-CELL: Software Environment for Whole Cell Simulation". Genome Inform Ser Workshop Genome Inform. 8: 147-155. PMID 11072314.

• Wolkenhauer O. (2001). "Systems biology: The reincarnation of systems theory applied in biology?". Briefings in 
Bioinformatics 2 (3): 258-270. doi:10.1093/bib/2.3.258. PMID 11589586. 

• "Special Issue: Systems Biology". Science 295 (5560). March 1, 2002.

• Marc Vidal and Eileen E. M. Furlong (2004). "From OMICS to systems biology". Nature Reviews Genetics.

• Marc Facciotti, Richard Bonneau, Leroy Hood and Nitin Baliga (2004). "Systems Biology Experimental Design - Considerations for Building Predictive Gene Regulatory Network Models for Prokaryotic Systems". Current Genomics.

• Basso K, Margolin AA, Stolovitzky G, Klein U, Dalla-Favera R, Califano A (April 2005). "Reverse engineering of regulatory networks in human B cells". Nat. Genet. 37 (4): 382-90. doi:10.1038/ng1532. PMID 15778709.

• Mario Jardon, Systems Biology: An Overview - a review from the Science Creative Quarterly, 2005.

• Johnjoe McFadden, 'The unselfish gene: The new biology is reasserting the primacy of the whole organism - the individual - over the behaviour of isolated genes', The Guardian (May 6, 2005).

• Pharaoh, M.C. (online). Looking to systems theory for a reductive explanation of phenomenal experience and evolutionary foundations for higher order thought (http://homepage.ntlworld.com/m.pharoah/). Retrieved Jan 15, 2008.

• WTEC Panel Report on International Research and Development in Systems Biology (sysbio/welcome.htm) (2005).


• E. Werner, "The Future and Limits of Systems Biology", Science STKE 2005, pe16 (2005).

• Francis J. Doyle and Jorg Stelling, "Systems interface biology". J. R. Soc. Interface Vol 3, No 10, 2006. doi:10.1098/rsif.2006.0143.

• Kahlem, P. and Birney E. (2006). "Dry work in a wet world: computation in systems biology". Mol Syst Biol 2: 40. doi:10.1038/msb4100080. PMID 16820781. PMC 1681512.

• E. Werner (March 2007). "All systems go" (PDF). Nature 446 (7135): 493-4. doi:10.1038/446493a. (Review of three books (Alon, Kaneko, and Palsson) on systems biology.)

• Santiago Schnell, Ramon Grima, Philip K. Maini (March-April 2007). "Multiscale Modeling in Biology". American Scientist 95: 134-142.

• TS Gardner, D di Bernardo, D Lorenz and JJ Collins (2003). "Inferring genetic networks and identifying compound mode of action via expression profiling". Science 301 (5629): 102-5. doi:10.1126/science.1081900. PMID 12843395.

• Jeffery C. Way and Pamela A. Silver, Why We Need Systems Biology.

• H.S. Wiley (June 2006). "Systems Biology - Beyond the Buzz". The Scientist.

• Nina Flanagan, "Systems Biology Alters Drug Development." Genetic Engineering & Biotechnology News, January 2008.

• Donckels Brecht, "Optimal experimental design to discriminate among rival dynamic mathematical models". PhD Thesis, Faculty of Bioscience Engineering, Ghent University, pp. 287. (2009).

External links 

• Institute for Systems Biology: SBI

• Applied BioDynamics Laboratory: Boston University

• Institute for Research in Immunology and Cancer (IRIC): Universite de Montreal

• Systems Biology

• Systems Biology Portal - administered by the Systems Biology Institute

• Semantic Systems Biology

• The Swiss Initiative in Systems Biology

• Systems Biology at the Pacific Northwest National Laboratory

• Institute of Bioinformatics and Systems Biology, National Chiao Tung University, Taiwan




Complexity

In general usage, complexity tends to be used to characterize something with many parts in intricate arrangement.
The study of these complex linkages is the main goal of network theory and network science. In science there are at 
this time a number of approaches to characterizing complexity, many of which are reflected in this article. In a 
business context, complexity management is the methodology to minimize value-destroying complexity and 
efficiently control value-adding complexity in a cross-functional approach. 

Definitions are often tied to the concept of a "system" — a set of parts or elements which have relationships among 
them differentiated from relationships with other elements outside the relational regime. Many definitions tend to 
postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of 
relationships among the elements. At the same time, what is complex and what is simple is relative and changes with time.

Some definitions key on the question of the probability of encountering a given condition of a system once 
characteristics of the system are specified. Warren Weaver has posited that the complexity of a particular system is 
the degree of difficulty in predicting the properties of the system if the properties of the system's parts are given. In 
Weaver's view, complexity comes in two forms: disorganized complexity, and organized complexity. Weaver's 
paper has influenced contemporary thinking about complexity. 

The approaches which embody concepts of systems, multiple elements, multiple relational regimes, and state spaces 
might be summarized as implying that complexity arises from the number of distinguishable relational regimes (and 
their associated state spaces) in a defined system. 

Some definitions relate to the algorithmic basis for the expression of a complex phenomenon or model or 
mathematical expression, as is later set out herein. 

Disorganized complexity vs. organized complexity

One of the problems in addressing complexity issues has been distinguishing conceptually between the large number of variances in relationships extant in random collections, and the sometimes large, but smaller, number of relationships between elements in systems where constraints (related to correlation of otherwise independent elements) simultaneously reduce the variations from element independence and create distinguishable regimes of more-uniform, or correlated, relationships or interactions.

Weaver perceived and addressed this problem, in at least a preliminary way, in drawing a distinction between 
"disorganized complexity" and "organized complexity". 

Map of Complexity Science. The web version of this map provides internet links to many of the leading scholars and areas of research in complexity science.



In Weaver's view, disorganized complexity results from the particular system 
having a very large number of parts, say millions of parts, or many more. 
Though the interactions of the parts in a "disorganized complexity" situation can 
be seen as largely random, the properties of the system as a whole can be 
understood by using probability and statistical methods. 

A prime example of disorganized complexity is a gas in a container, with the gas 
molecules as the parts. Some would suggest that a system of disorganized 
complexity may be compared, for example, with the (relative) simplicity of the 
planetary orbits — the latter can be known by applying Newton's laws of motion, 
though this example involved highly correlated events. 
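
Weaver's point that disorganized complexity yields to probability and statistics can be shown with a small Monte Carlo sketch (the particle count and uniform "speed" distribution are illustrative assumptions, not from the text):

```python
import random

def mean_speed(n_molecules, seed):
    """Average 'speed' of n molecules drawn uniformly from [0, 1)."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_molecules)) / n_molecules

# Each molecule is individually unpredictable, yet the aggregate is sharply
# predictable: with 100,000 parts, every run's mean lands close to the
# expected value of 0.5, regardless of the random seed.
estimates = [mean_speed(100_000, seed) for seed in range(3)]
print(all(abs(m - 0.5) < 0.01 for m in estimates))  # True
```

The point of the sketch is that the statistical description becomes more reliable, not less, as the number of random parts grows.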

Organized complexity, in Weaver's view, resides in nothing else than the 
non-random, or correlated, interaction between the parts. These correlated 
relationships create a differentiated structure which can, as a system, interact 
with other systems. The coordinated system manifests properties not carried by, 
or dictated by, individual parts. The organized aspect of this form of complexity 
vis a vis other systems than the subject system can be said to "emerge," without 
any "guiding hand". 


[Map legend omitted: the map is organized as a rough timeline divided into five periods (old school, perception, the new science of complexity, a work in progress, and recent developments); fields of study are drawn as double-lined ellipses, areas of research as single-lined circles, and each area of research lists a representative set of leading scholars.]

The number of parts does not have to be very large for a particular system to have emergent properties. A system of 
organized complexity may be understood in its properties (behavior among the properties) through modeling and 
simulation, particularly modeling and simulation with computers. An example of organized complexity is a city neighborhood as a living mechanism, with the neighborhood people among the system's parts.


Sources and factors of complexity 

The source of disorganized complexity is the large number of parts in the system of interest, and the lack of 
correlation between elements in the system. 

There is no consensus at present on general rules regarding the sources of organized complexity, though the lack of 
randomness implies correlations between elements. See e.g. Robert Ulanowicz's treatment of ecosystems. 
Consistent with prior statements here, the number of parts (and types of parts) in the system and the number of relations between the parts would have to be non-trivial - however, there is no general rule to separate "trivial" from "non-trivial".

Complexity of an object or system is a relative property. For instance, for many functions (problems), such a computational complexity as time of computation is smaller when multitape Turing machines are used than when Turing machines with one tape are used. Random access machines allow one to decrease time complexity even further (Greenlaw and Hoover 1998: 226), while inductive Turing machines can decrease even the complexity class of a function, language or set (Burgin 2005). This shows that tools of activity can be an important factor of complexity.

Specific meanings of complexity 

In several scientific fields, "complexity" has a specific meaning:

• In computational complexity theory, the amounts of resources required for the execution of algorithms are studied. The most popular types of computational complexity are the time complexity of a problem, equal to the number of steps that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm, and the space complexity of a problem, equal to the volume of the memory used by the algorithm (e.g., cells of the tape) that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm. This allows computational problems to be classified by complexity class (such as P, NP, ...). An axiomatic approach to computational complexity was developed by Manuel Blum. It allows one to deduce many properties of concrete computational complexity measures, such as time complexity or space complexity, from properties of axiomatically defined measures.
• In algorithmic information theory, the Kolmogorov complexity (also called descriptive complexity, algorithmic 
complexity or algorithmic entropy) of a string is the length of the shortest binary program which outputs that 
string. Different kinds of Kolmogorov complexity are studied: the uniform complexity, prefix complexity, 
monotone complexity, time-bounded Kolmogorov complexity, and space-bounded Kolmogorov complexity. An 
axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark 
Burgin in the paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach 
encompasses other approaches to Kolmogorov complexity. It is possible to treat different kinds of Kolmogorov 
complexity as particular cases of axiomatically defined generalized Kolmogorov complexity. Instead of proving
similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce 
all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of 
the axiomatic approach in mathematics. The axiomatic approach to Kolmogorov complexity was further 
developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and 
Burgin, 2003). 

• In information processing, complexity is a measure of the total number of properties transmitted by an object and 
detected by an observer. Such a collection of properties is often referred to as a state. 

• In business, complexity describes the variances and their consequences in various fields such as product portfolio, 
technologies, markets and market segments, locations, manufacturing network, customer portfolio, IT systems, 
organization, processes etc. 

• In physical systems, complexity is a measure of the probability of the state vector of the system. This should not 
be confused with entropy; it is a distinct mathematical measure, one in which two distinct states are never 
conflated and considered equal, as is done for the notion of entropy in statistical mechanics.

• In mathematics, Krohn-Rhodes complexity is an important topic in the study of finite semigroups and automata. 

• In software engineering, programming complexity is a measure of the interactions of the various elements of the 
software. This differs from the computational complexity described above in that it is a measure of the design of 
the software. 
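
The idea of time complexity as "number of steps versus input size" can be made concrete by instrumenting two search algorithms (a hedged sketch; the step counters are an illustrative stand-in for a formal machine model):

```python
def linear_search_steps(sorted_list, target):
    """Count the comparisons a linear scan makes: O(n) in the worst case."""
    steps = 0
    for x in sorted_list:
        steps += 1
        if x >= target:
            break
    return steps

def binary_search_steps(sorted_list, target):
    """Count the comparisons binary search makes: O(log n)."""
    lo, hi, steps = 0, len(sorted_list), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

data = list(range(1024))
print(linear_search_steps(data, 1023))  # 1024 comparisons: grows linearly with n
print(binary_search_steps(data, 1023))  # 10 comparisons: grows as log2(n)
```

Doubling the input doubles the linear count but adds only one step to the binary count, which is exactly what the asymptotic classes predict.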

There are different specific forms of complexity: 

• In the sense of how complicated a problem is from the perspective of the person trying to solve it, limits of 
complexity are measured using a term from cognitive psychology, namely the hrair limit. 

• Complex adaptive system denotes systems which have some or all of the following attributes:

• The number of parts (and types of parts) in the system and the number of relations between the parts is 
non-trivial — however, there is no general rule to separate "trivial" from "non-trivial"; 

• The system has memory or includes feedback; 

• The system can adapt itself according to its history or feedback; 


• The relations between the system and its environment are non-trivial or non-linear; 

• The system can be influenced by, or can adapt itself to, its environment; and 

• The system is highly sensitive to initial conditions. 

Study of complexity 

Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex 
systems and phenomena. Indeed, some would say that only what is somehow complex — what displays variation 
without being random — is worthy of interest. 

The use of the term complex is often confused with the term complicated. In today's systems, this is the difference 
between myriad connecting "stovepipes" and effective "integrated" solutions. This means that complex is the 
opposite of independent, while complicated is the opposite of simple. 

While this has led some fields to come up with specific definitions of complexity, there is a more recent movement 
to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human 
brains, or stock markets. One such interdisciplinary group of fields is relational order theories.

Complexity topics 
Complex behaviour 

The behaviour of a complex system is often said to be due to emergence and self-organization. Chaos theory has 
investigated the sensitivity of systems to variations in initial conditions as one cause of complex behaviour. 
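
The sensitivity to initial conditions that chaos theory studies can be sketched with the logistic map, a standard textbook example (the parameter r = 4 and the size of the perturbation are illustrative choices):

```python
def logistic_orbit(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)  # perturb the initial condition by one millionth

# The tiny initial difference is amplified at every step until the two
# trajectories bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a, b)) > 0.1)  # True
```

This is why deterministic rules can still produce behaviour that is unpredictable in practice: any measurement error in the initial state grows exponentially.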

Complex mechanisms 

Recent developments around artificial life, evolutionary computation and genetic algorithms have led to an 
increasing emphasis on complexity and complex adaptive systems. 

Complex simulations 

In social science, the study of the emergence of macro-properties from micro-properties is also known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology.

Complex systems 

Systems theory has long been concerned with the study of complex systems (in recent times, complexity theory and complex systems have also been used as names of the field). These systems can be biological, economic, technological, etc. More recently, complexity has become a natural domain of interest in real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear and hard to model. In specific circumstances they may exhibit low-dimensional behaviour.

Complexity in data 

In information theory, algorithmic information theory is concerned with the complexity of strings of data. 

Complex strings are harder to compress. While intuition tells us that this may depend on the codec used to compress 
a string (a codec could be theoretically created in any arbitrary language, including one in which the very small 
command "X" could cause the computer to output a very complicated string like "18995316"), any two 
Turing-complete languages can be implemented in each other, meaning that the length of two encodings in different 
languages will vary by at most the length of the "translation" language — which will end up being negligible for 
sufficiently large data strings. 
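
The compressibility criterion above can be made concrete: the output length of a real codec gives a rough upper bound on a string's algorithmic complexity. A minimal sketch using zlib (the string sizes and thresholds are illustrative assumptions):

```python
import random
import zlib

def approx_complexity(data: bytes) -> int:
    """Compressed length as a crude upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

simple = b"ab" * 5000  # 10,000 bytes of pure repetition: a short program could emit it
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(10_000))  # 10,000 bytes of noise

print(approx_complexity(simple) < 100)    # True: regular data compresses to almost nothing
print(approx_complexity(noisy) > 9000)    # True: random data barely shrinks at all
```

Note that this measure, like the algorithmic ones discussed below, assigns its highest values to random noise rather than to what complex-systems researchers would call "complex".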


These algorithmic measures of complexity tend to assign high values to random noise. However, those studying 
complex systems would not consider randomness as complexity. 

Information entropy is also sometimes used in information theory as indicative of complexity. 
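
Information entropy, mentioned here as an indicator of complexity, can be computed directly from symbol frequencies. A short sketch of Shannon entropy in bits per symbol (the example strings are arbitrary):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per symbol: -sum over x of p(x) * log2 p(x)."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A single repeated symbol carries no surprise; equiprobable symbols carry
# log2(alphabet size) bits each.
print(shannon_entropy("abab"))  # 1.0 - two equiprobable symbols: one bit each
print(shannon_entropy("abcd"))  # 2.0 - four equiprobable symbols: two bits each
```

Like compressed length, entropy peaks on uniform randomness, so it shares the caveat above about equating randomness with complexity.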

Applications of complexity 

Computational complexity theory is the study of the complexity of problems — that is, the difficulty of solving them. 
Problems can be classified by complexity class according to the time it takes for an algorithm — usually a computer 
program — to solve them as a function of the problem size. Some problems are difficult to solve, while others are 
easy. For example, some difficult problems need algorithms that take an exponential amount of time in terms of the 
size of the problem to solve. Take the travelling salesman problem, for example. It can be solved in time O(n^2 2^n) (where n is the size of the network to visit - let's say the number of cities the travelling salesman must visit exactly once). As the size of the network of cities grows, the time needed to find the route grows (more than) exponentially.
Even though a problem may be computationally solvable in principle, in actual practice it may not be that simple. 
These problems might require large amounts of time or an inordinate amount of space. Computational complexity 
may be approached from many different aspects. Computational complexity can be investigated on the basis of time, 
memory or other resources used to solve the problem. Time and space are two of the most important and popular 
considerations when problems of complexity are analyzed. 
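
The O(n^2 2^n) bound quoted above for the travelling salesman problem is achieved by the Held-Karp dynamic program, which is still exponential but far better than brute-force O(n!) enumeration. A minimal sketch (the 4-city distance matrix is a made-up example):

```python
from itertools import combinations

def held_karp(dist):
    """Length of the shortest tour visiting every city once, returning to city 0.
    Runs in O(n^2 * 2^n) time, versus O(n!) for enumerating all tours."""
    n = len(dist)
    # best[(subset, j)] = cheapest path from city 0 through subset, ending at j
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            s = frozenset(subset)
            for j in subset:
                best[(s, j)] = min(best[(s - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Symmetric 4-city example (distances are arbitrary illustration):
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(held_karp(dist))  # 18: the optimal tour 0 -> 1 -> 3 -> 2 -> 0
```

Even this improved algorithm becomes intractable quickly: the 2^n factor means each added city roughly doubles the work, illustrating why such problems are considered hard in practice.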

There exist a certain class of problems that although they are solvable in principle they require so much time or 
space that it is not practical to attempt to solve them. These problems are called intractable. 

There is another form of complexity called hierarchical complexity. It is orthogonal to the forms of complexity 
discussed so far, which are called horizontal complexity.

See also 

Chaos theory 

Command and Control Research Program 

Complexity theory (disambiguation page) 

Cyclomatic complexity 

Digital morphogenesis 

Evolution of complexity 

Game complexity 

Holism in science 


Model of Hierarchical Complexity 

Names of large numbers 

Network science 

Network theory 

Novelty theory 

Occam's razor 

Process architecture 

Programming Complexity 

Sociology and complexity science 

Systems theory 

Variety (cybernetics) 

Volatility, uncertainty, complexity and ambiguity 



References

[1] Weaver, Warren (1948). "Science and Complexity". American Scientist 36 (4): 536. PMID 18882675.
[2] Johnson, Steven (2001). Emergence: the connected lives of ants, brains, cities, and software. New York: Scribner. p. 46. 

ISBN 0-684-86875-X. 

[4] Jacobs, Jane (1961). The Death and Life of Great American Cities. New York: Random House. 
[5] Ulanowicz, Robert, "Ecology, the Ascendant Perspective", Columbia, 1997 
[6] Johnson, Neil F. (2007). Two's Company, Three is Complexity: A simple guide to the science of all sciences. Oxford: Oneworld. 

ISBN 978-1-85168-488-5. 
[7] Lissack, Michael R.; Johan Roos (2000). The Next Common Sense, The e-Manager's Guide to Mastering Complexity. Intercultural Press. 

ISBN 9781857882353. 

Further reading 

Lewin, Roger (1992). Complexity: Life at the Edge of Chaos. New York: Macmillan Publishing Co. 

ISBN 9780025704855. 

Waldrop, M. Mitchell (1992). Complexity: The Emerging Science at the Edge of Order and Chaos. New York: 

Simon & Schuster. ISBN 9780671767891. 

Czerwinski, Tom; David Alberts (1997). Complexity, Global Politics, and National Security. National Defense University. ISBN 9781579060466.

Czerwinski, Tom (1998). Coping with the Bounds: Speculations on Nonlinearity in Military Affairs. CCRP. ISBN 9781414503158 (from Pavilion Press, 2004).

Lissack, Michael R.; Johan Roos (2000). The Next Common Sense, The e-Manager's Guide to Mastering 

Complexity. Intercultural Press. ISBN 9781857882353. 

Sole, R. V.; B. C. Goodwin (2002). Signs of Life: How Complexity Pervades Biology. Basic Books. 

ISBN 9780465019281. 

Moffat, James (2003). Complexity Theory and Network Centric Warfare. CCRP. ISBN 9781893723115.

Smith, Edward (2006). Complexity, Networking, and Effects Based Approaches to Operations. CCRP. ISBN 9781893723184.

Heylighen, Francis (2008). "Complexity and Self-Organization". In Bates, Marcia J.; Maack, Mary Niles. Encyclopedia of Library and Information Sciences. CRC. ISBN 9780849397127.

Greenlaw, N. and Hoover, H.J. Fundamentals of the Theory of Computation, Morgan Kaufmann Publishers, San Francisco, 1998.

Blum, M. (1967) On the Size of Machines, Information and Control, v. 11, pp. 257-265.

Burgin, M. (1982) Generalized Kolmogorov complexity and duality in theory of computations, Notices of the Russian Academy of Sciences, v. 25, No. 3, pp. 19-23.

Mark Burgin (2005), Super-recursive algorithms, Monographs in computer science, Springer. 

Burgin, M. and Debnath, N. Hardship of Program Utilization and User-Friendly Software, in Proceedings of the 

International Conference "Computer Applications in Industry and Engineering" , Las Vegas, Nevada, 2003, 

pp. 314-317 

Debnath, N.C. and Burgin, M., (2003) Software Metrics from the Algorithmic Perspective, in Proceedings of the ISCA 18th International Conference "Computers and their Applications", Honolulu, Hawaii, pp. 279-282.

Meyers, R.A., (2009) "Encyclopedia of Complexity and Systems Science", ISBN 978-0-387-75888-6 

Caterina Liberati, J. Andrew Howe, Hamparsum Bozdogan, Data Adaptive Simultaneous Parameter and Kernel Selection in Kernel Discriminant Analysis Using Information Complexity (article/view/117), Journal of Pattern Recognition Research, JPRR, Vol 4, No 1, 2009.


• Gershenson, C. and F. Heylighen (2005). How can we think the complex? ( 
0402023) In Richardson, Kurt (ed.) Managing Organizational Complexity: Philosophy, Theory and Application, 
Chapter 3. Information Age Publishing. 

External links 

• Quantifying Complexity Theory ( — classification of complex 

• Complexity Measures ( — an article 
about the abundance of not-that-useful complexity measures. 

• UC Four Campus Complexity Videoconferences ( — 
Human Sciences and Complexity 

• Complexity Digest ( — networking the complexity community 

• The Santa Fe Institute ( — engages in research in complexity related topics 

• Exploring Complexity in Science and Technology (ExploringComplexityFall2009/index.html) — an introductory course about complex systems by Melanie Mitchell

Complex adaptive system 

Complex adaptive systems are special cases of complex systems. They are complex in that they are dynamic networks of interactions and relationships, not aggregations of static entities. They are adaptive in that their individual and collective behaviour changes as a result of experience.


The term complex adaptive systems, or complexity science, is often used to describe the loosely organized academic 
field that has grown up around the study of such systems. Complexity science is not a single theory — it 
encompasses more than one theoretical framework and is highly interdisciplinary, seeking the answers to some 
fundamental questions about living, adaptable, changeable systems. 

Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and 
the ecosystem, the brain and the immune system, the cell and the developing embryo, manufacturing businesses and 
any human social group-based endeavour in a cultural and social system such as political parties or communities. 
There are close relationships between the field of CAS and artificial life. In both areas the principles of emergence 
and self-organization are very important. 

The ideas and models of CAS are essentially evolutionary, grounded in modern chemistry, biological views on 
adaptation, exaptation and evolution and simulation models in economics and social systems. 


A CAS is a complex, self-similar collection of interacting adaptive agents. The study of CAS focuses on complex, 
emergent and macroscopic properties of the system. Various definitions have been offered by different researchers: 

• John H. Holland 

A Complex Adaptive System (CAS) is a dynamic network of many agents (which may represent cells, species, 
individuals, firms, nations) acting in parallel, constantly acting and reacting to what the other agents are doing. 
The control of a CAS tends to be highly dispersed and decentralized. If there is to be any coherent behavior in 
the system, it has to arise from competition and cooperation among the agents themselves. The overall 
behavior of the system is the result of a huge number of decisions made every moment by many individual 
agents. [2] 


A CAS behaves and evolves according to three key principles: order is emergent as opposed to predetermined (cf. 
neural networks), the system's history is irreversible, and the system's future is often unpredictable. The basic 
building blocks of the CAS are agents. Agents scan their environment and develop schemata representing 
interpretive and action rules. These schemata are subject to change and evolution. 

• Other definitions 

Macroscopic collections of simple (and typically nonlinear) interacting units that are endowed with the ability 
to evolve and adapt to a changing environment. 

General properties 

What distinguishes a CAS from a pure multi-agent system (MAS) is the focus on top-level properties and features 
like self-similarity, complexity, emergence and self-organization. A MAS is simply defined as a system composed of 
multiple interacting agents. In CASs, the agents as well as the system are adaptive: the system is self-similar. A CAS 
is a complex, self-similar collectivity of interacting adaptive agents. Complex Adaptive Systems are characterised by 
a high degree of adaptive capacity, giving them resilience in the face of perturbation. 

Other important properties are adaptation (or homeostasis), communication, cooperation, specialization, spatial and 
temporal organization, and of course reproduction. They can be found on all levels: cells specialize, adapt and 
reproduce themselves just like larger organisms do. Communication and cooperation take place on all levels, from 
the agent to the system level. The forces driving co-operation between agents in such a system can, in some cases be 
analysed with game theory. 


The most important characteristics of complex adaptive systems are the following: 

• The number of elements is sufficiently large that a conventional description (e.g. a system of differential 
equations) is not only impractical but ceases to assist in understanding the system. The elements also have to 
interact, and the interaction must be dynamic. Interactions can be physical or involve the exchange of information. 

• Such interactions are rich, i.e. any element in the system is affected by, and affects, several other elements. 

• The interactions are non-linear which means that small causes can have large results. 

• Interactions are primarily but not exclusively with immediate neighbours, and the nature of the influence is 
modulated. 

• Any interaction can feed back onto itself directly or after a number of intervening stages; such feedback can vary 
in quality. This is known as recurrency. 

• Such systems are open, and it may be difficult or impossible to define system boundaries. 

• Complex systems operate under far-from-equilibrium conditions; there has to be a constant flow of energy to 
maintain the organisation of the system. 

• All complex systems have a history; they evolve, and their past is co-responsible for their present behaviour. 

• Elements in the system are ignorant of the behaviour of the system as a whole, responding only to the 
information available to them locally. 

Axelrod & Cohen identify a series of key terms from a modeling perspective: 

• Strategy, a conditional action pattern that indicates what to do in which circumstances 

• Artifact, a material resource that has definite location and can respond to the action of agents 

• Agent, a collection of properties, strategies & capabilities for interacting with artifacts & other agents 

• Population, a collection of agents, or, in some situations, collections of strategies 

• System, a larger collection, including one or more populations of agents and possibly also artifacts. 

• Type, all the agents (or strategies) in a population that have some characteristic in common 



• Variety, the diversity of types within a population or system 

• Interaction pattern, the recurring regularities of contact among types within a system 

• Space (physical), location in geographical space & time of agents and artifacts 

• Space (conceptual), "location" in a set of categories structured so that "nearby" agents will tend to interact 

• Selection, processes that lead to an increase or decrease in the frequency of various types of agent or strategies 

• Success criteria or performance measures, a "score" used by an agent or designer in attributing credit in the 
selection of relatively successful (or unsuccessful) strategies or agents. 
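From a modeling perspective these terms translate naturally into code. The following is a minimal illustrative sketch, not Axelrod & Cohen's own formalism; the types, the toy hawk/dove population, and the random scores are all hypothetical:

```python
import random
from dataclasses import dataclass

@dataclass
class Strategy:
    """A conditional action pattern: what to do in which circumstances."""
    condition: str
    action: str

@dataclass
class Agent:
    """A bundle of properties and strategies for interacting with others."""
    agent_type: str
    strategies: list
    score: float = 0.0   # set by some success criterion / performance measure

def select(population, keep_fraction=0.5):
    """Selection: increase the frequency of relatively successful types by
    keeping the best scorers and refilling the population with their copies."""
    ranked = sorted(population, key=lambda a: a.score, reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep_fraction))]
    # Offspring copy a parent's type and strategies but start unscored.
    offspring = [Agent(p.agent_type, list(p.strategies))
                 for p in survivors * 2][: len(population) - len(survivors)]
    return survivors + offspring

random.seed(1)
# A toy population of two types; random scores stand in for a real measure.
pop = ([Agent("hawk", [Strategy("rival nearby", "fight")], random.random())
        for _ in range(5)] +
       [Agent("dove", [Strategy("rival nearby", "retreat")], random.random())
        for _ in range(5)])
pop = select(pop)
variety = {a.agent_type for a in pop}   # the diversity of types remaining
```

Here selection keeps the higher-scoring half of the population and refills it with copies, so over repeated rounds the variety of types can shrink as one type outcompetes the other.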

Evolution of complexity 

Living organisms are complex adaptive systems. Although complexity is hard to quantify in biology, evolution has 
produced some remarkably complex organisms. This observation has led to the common misconception of evolution 
being progressive and leading towards what are viewed as "higher organisms". [8] 



Passive versus active trends in the evolution of complexity. CAS at the beginning of the processes are colored red. 
Changes in the number of systems are shown by the height of the bars, with each set of graphs moving up in a time 
series. 

If this were generally true, evolution would 
possess an active trend towards complexity. 
As shown below, in this type of process the 
value of the most common amount of 
complexity would increase over time. 
Indeed, some artificial life simulations have 
suggested that the generation of CAS is an 
inescapable feature of evolution. 

However, the idea of a general trend 

towards complexity in evolution can also be 

explained through a passive process. This 

involves an increase in variance but the most common value, the mode, does not change. Thus, the maximum level 

of complexity increases over time, but only as an indirect product of there being more organisms in total. This type 

of random process is also called a bounded random walk. 

In this hypothesis, the apparent trend towards more complex organisms is an illusion resulting from concentrating on 
the small number of large, very complex organisms that inhabit the right-hand tail of the complexity distribution and 
ignoring simpler and much more common organisms. This passive model emphasizes that the overwhelming 

majority of species are microscopic prokaryotes, which comprise about half the world's biomass and constitute 

the vast majority of Earth's biodiversity. Therefore, simple life remains dominant on Earth, and complex life 

appears more diverse only because of sampling bias. 
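The passive process is straightforward to simulate. In this hedged sketch (the lineage count and step number are arbitrary), each lineage's complexity performs an unbiased random walk with a reflecting floor, so the mode stays near the minimum while the maximum increases purely as a by-product of growing variance:

```python
import random
from collections import Counter

random.seed(0)

def evolve(n_lineages=2000, steps=200):
    """Passive model: each lineage's 'complexity' performs an unbiased
    random walk with a reflecting floor at 1 (minimal complexity)."""
    complexity = [1] * n_lineages
    for _ in range(steps):
        for i in range(n_lineages):
            complexity[i] = max(1, complexity[i] + random.choice((-1, 1)))
    return complexity

final = evolve()
mode = Counter(final).most_common(1)[0][0]
# The most common complexity stays near the floor while the maximum drifts
# upward, even though no individual step is biased towards complexity.
```

In this bounded random walk the distribution spreads to the right over time, reproducing the passive pattern described above: simple lineages remain the most common, while a long tail of rare, more complex lineages develops.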

This lack of an overall trend towards complexity in biology does not preclude the existence of forces driving systems 
towards complexity in a subset of cases. These minor trends are balanced by other evolutionary pressures that drive 
systems towards less complex states. 



See also 

Agent-based model 

Artificial life 

Center for Complex Systems and Brain Sciences 

Center for Social Dynamics & Complexity (CSDC) at Arizona State University 

Cognitive Science 

Command and Control Research Program 

Complex system 


Computational Sociology 

Enterprise systems engineering 

Generative sciences 

Santa Fe Institute 

Simulated reality 

Sociology and complexity science 

Swarm Development Group 


References 

[1] A. Juarrero (2000). Dynamics in Action: Intentional Behaviour as a Complex System. MIT Press. ISBN 9780262100816. 

[2] M. Mitchell Waldrop. (1994). Complexity: the emerging science at the edge of order and chaos. Harmondsworth [Eng.]: Penguin. 

ISBN 0-14-017968-2. 
[3] K. Dooley, AZ State University ( 
[4] Complexity in Social Science glossary (http://www. php?letter=C) a research training project of the 

European Commission 
[5] Cilliers, Paul. Complexity and Postmodernism: Understanding Complex Systems. 

[6] Axelrod, Robert; Cohen, Michael D. Harnessing Complexity. 

[7] Adami C (2002). "What is complexity?". Bioessays 24 (12): 1085–94. doi:10.1002/bies.10192. PMID 12447974. 

[8] McShea D (1991). "Complexity and evolution: What everybody knows". Biology and Philosophy 6 (3): 303–24. doi:10.1007/BF00132234. 

[9] Carroll SB (2001). "Chance and necessity: the evolution of morphological complexity and diversity". Nature 409 (6823): 1102–9. 
doi:10.1038/35059227. PMID 11234024. 

[10] Furusawa C, Kaneko K (2000). "Origin of complexity in multicellular organisms". Phys. Rev. Lett. 84 (26 Pt 1): 6130–3. 
doi:10.1103/PhysRevLett.84.6130. PMID 10991141. 

[11] Adami C, Ofria C, Collier TC (2000). "Evolution of biological complexity" (http://www.pnas.org/cgi/content/full/97/9/4463). Proc. 
Natl. Acad. Sci. U.S.A. 97 (9): 4463–8. doi:10.1073/pnas.97.9.4463. PMID 10781045. PMC 18257. 

[12] Oren A (2004). "Prokaryote diversity and taxonomy: current status and future challenges" 
(articlerender.fcgi?tool=pmcentrez&artid=1693353). Philos. Trans. R. Soc. Lond., B, Biol. Sci. 359 (1444): 623–38. 
doi:10.1098/rstb.2003.1458. PMID 15253349. PMC 1693353. 

[13] Whitman W, Coleman D, Wiebe W (1998). "Prokaryotes: the unseen majority". Proc Natl Acad Sci USA 95 (12): 6578–83. 
doi:10.1073/pnas.95.12.6578. PMID 9618454. PMC 33863. 

[14] Schloss P, Handelsman J (2004). "Status of the microbial census" (http://mmbr.asm.org/cgi/pmidlookup?view=long&pmid=15590780). 
Microbiol Mol Biol Rev 68 (4): 686–91. doi:10.1128/MMBR.68.4.686-691.2004. PMID 15590780. PMC 539005. 


• Ahmed E, Elgazzar AS, Hegazi AS (28 June 2005). "An overview of complex adaptive systems" 
(http://arxiv.org/abs/nlin/0506059). Mansoura J. Math 32. arXiv:nlin/0506059v1 [nlin.AO]. 

• Bullock S, Cliff D (2004). Complexity and Emergent Behaviour in ICT Systems (techreports/2004/HPL-2004-187.html). 
Hewlett-Packard Labs. HPL-2004-187; commissioned as a report 
(www.foresight.gov.uk/OurWork/CompletedProjects/IIS/Docs/ComplexityandEmergentBehaviour.asp) by 
the UK government's Foresight Programme 

• Dooley, K., Complexity in Social Science glossary a research training project of the European Commission. 

• Edwin E. Olson and Glenda H. Eoyang (2001). Facilitating Organization Change. San Francisco: Jossey-Bass. 
ISBN 0-7879-5330-X. 

• Gell-Mann, Murray (1994). The quark and the jaguar: adventures in the simple and the complex. San Francisco: 
W.H. Freeman. ISBN 0-7167-2581-9. 

• Holland, John H. (1992). Adaptation in natural and artificial systems: an introductory analysis with applications 
to biology, control, and artificial intelligence. Cambridge, Mass: MIT Press. ISBN 0-262-58111-6. 

• Holland, John H. (1999). Emergence: from chaos to order. Reading, Mass: Perseus Books. ISBN 0-7382-0142-1. 


• Kelly, Kevin (1994) (Full text available online). Out of control: the new biology of machines, social systems and 
the economic world ( Boston: Addison-Wesley. 

ISBN 0-201-48340-8. 

• Pharaoh, M.C. (online). Looking to systems theory for a reductive explanation of phenomenal experience and 
evolutionary foundations for higher order thought (http://homepage.ntlworld.com/m.pharoah/). Retrieved 15 
January 2008. 

External links 

• Complexity Digest ( comprehensive digest of latest CAS related news and research. 

• DNA Wales Research Group ( Current Research in Organisational change 
CAS/CES related news and free research data. Also linked to the Business Doctor & BBC documentary series 

• A description ( of complex adaptive systems on the Principia 
Cybernetica Web. 

• Quick reference ( single-page description of the 'world' of 
complexity and related ideas hosted by the Center for the Study of Complex Systems at the University of 

• Complex systems research network ( 

• The Open Agent-Based Modeling Consortium ( 

Computational biology 

Computational biology is an interdisciplinary field that applies the techniques of computer science, applied 
mathematics and statistics to address biological problems. The main focus lies in the development of computational 
and statistical data analysis methods and in developing mathematical modeling and computational simulation 
techniques. By these means it addresses the theoretical and experimental questions of scientific research 
outside the laboratory. It is connected to the following fields: 

• Computational biomodeling, a field concerned with building computer models of biological systems. 

• Bioinformatics, which applies algorithms and statistical techniques to the interpretation, classification and 
understanding of biological datasets. These typically consist of large numbers of DNA, RNA, or protein 
sequences. Sequence alignment is used to assemble the datasets for analysis. Comparisons of homologous 
sequences, gene finding, and prediction of gene expression are the most common techniques used on assembled 
datasets; however, analysis of such datasets has many applications throughout all fields of biology. 

• Mathematical biology aims at the mathematical representation, treatment and modeling of biological processes, 
using a variety of applied mathematical techniques and tools. 

• Computational genomics, a field within genomics which studies the genomes of cells and organisms. 
High-throughput genome sequencing produces large amounts of data that require extensive post-processing (genome 
assembly); the field also uses DNA microarray technologies to perform statistical analyses on the genes expressed in 
individual cell types. This can help find genes of interest for certain diseases or conditions. This field also studies 
the mathematical foundations of sequencing. 

• Molecular modeling, which consists of modelling the behaviour of molecules of biological importance. 

• Protein structure prediction and structural genomics, which attempt to systematically produce accurate structural 
models for three-dimensional protein structures that have not been determined experimentally. 

• Computational biochemistry and biophysics, which make extensive use of structural modeling and simulation 
methods such as molecular dynamics and Monte Carlo method-inspired Boltzmann sampling methods in an 


attempt to elucidate the kinetics and thermodynamics of protein functions. 

Biostatistics 

Biostatistics (a contraction of biology and statistics; sometimes referred to as biometry or biometrics) is the 
application of statistics to a wide range of topics in biology. The science of biostatistics encompasses the design of 
biological experiments, especially in medicine and agriculture; the collection, summarization, and analysis of data 
from those experiments; and the interpretation of, and inference from, the results. 

Biostatistics and the history of biological thought 

Biostatistical reasoning and modeling were of critical importance to the foundation theories of modern biology. In 
the early 1900s, after the rediscovery of Mendel's work, the conceptual gaps in understanding between genetics and 
evolutionary Darwinism led to vigorous debate between biometricians such as Walter Weldon and Karl Pearson and 
Mendelians such as Charles Davenport, William Bateson and Wilhelm Johannsen. By the 1930s statisticians and 
models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian 
modern evolutionary synthesis. 

The leading figures in the establishment of this synthesis all relied on statistics and developed its use in biology. 

• Sir Ronald A. Fisher developed several basic statistical methods in support of his work The Genetical Theory of 
Natural Selection 

• Sewall G. Wright used statistics in the development of modern population genetics 

• J. B. S Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of 
evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. 

These individuals and the work of other biostatisticians, mathematical biologists, and statistically inclined geneticists 
helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be 
quantitatively modeled. 

In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also 
helped to add quantitative discipline to biological study. 

Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been 
a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote 
describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am 
like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I 
can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my 
department waste scarce resources in placer mining." Educators are now adjusting their curricula to focus on more 
quantitative concepts and tools. 

Education and training programs 

Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of 
public health, affiliated with schools of medicine, forestry, or agriculture or as a focus of application in departments 
of statistics. 

In the United States, while several universities have dedicated biostatistics departments, many other top-tier 
universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus 
departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new 
biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas 
older departments, typically affiliated with schools of public health, will have more traditional lines of research 


involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities where both a 
statistics and a biostatistics department exist, the degree of integration between the two departments may range from 
the bare minimum to very close collaboration. In general, the difference between a statistics program and a 
biostatistics one is twofold: (i) statistics departments will often host theoretical/methodological research which are 
less common in biostatistics programs and (ii) statistics departments have lines of research that may include 
biomedical applications but also other areas such as industry (quality control), business and economics and 
biological areas other than medicine. 

Applications of biostatistics 

• Public health, including epidemiology, health services research, nutrition, and environmental health 

• Design and analysis of clinical trials in medicine 

• Population genetics, and statistical genetics in order to link variation in genotype with a variation in phenotype. 
This has been used in agriculture to improve crops and farm animals (animal breeding). In biomedical research, 
this work can assist in finding candidates for gene alleles that can cause or influence predisposition to disease in 
human genetics 

• Analysis of genomics data, for example from microarray or proteomics experiments, often concerning 
diseases or disease stages 

• Ecology, ecological forecasting 

• Biological sequence analysis 

• Systems biology for gene network inference or pathways analysis 

Statistical methods are beginning to be integrated into medical informatics, public health informatics, bioinformatics 
and computational biology. 

Biostatistics journals 




The International Journal of Biostatistics 

Canadian Journal of Epidemiology and Biostatistics 

Journal of Agricultural, Biological, and Environmental Statistics 

Journal of Biopharmaceutical Statistics 

Pharmaceutical Statistics 

Statistical Applications in Genetics and Molecular Biology 

Statistics in Biopharmaceutical Research 

Statistics in Medicine 

Turkiye Klinikleri Journal of Biostatistics 


Related fields 

Biostatistics shares several methods with quantitative fields such as: 

• computational biology 

• computer science, 

• operations research, 

• psychometrics, 

• statistics, 

• econometrics, and 

• mathematical demography 

See also 

Ecological forecasting 
Group size measures 
Machine Learning 
Network Biology 
Quantitative parasitology 
Systems Biology 


References 

[1] Charles T. Munger (2003-10-03). "Academic Economics: Strengths and Faults After Considering Interdisciplinary Needs" (http://www. 
[2] "Spotlight:application of quantitative concepts and techniques in undergraduate biology" ( 

Spotlights/BioMath.htm). . 
[3] Helen Causton, John Quackenbush and Alvis Brazma (2003). "Statistical Analysis of Gene Expression Microarray Data". Wiley-Blackwell. 
[4] Terry Speed (2003). "Microarray Gene Expression Data Analysis: A Beginner's Guide". Chapman & Hall/CRC. 
[5] Frank Emmert-Streib and Matthias Dehmer (2010). "Medical Biostatistics for Complex Diseases". Wiley-Blackwell. 
[6] Warren J. Ewens and Gregory R. Grant (2004). "Statistical Methods in Bioinformatics: An Introduction". Springer. 

External links 

• The International Biometric Society ( 

• The Collection of Biostatistics Research Archive ( 

• Guide to Biostatistics 

• Biostatistician ( 


Statistical Applications in Genetics and Molecular Biology ( 

Statistics in Medicine ( 

The International Journal of Biostatistics ( 

Journal of Agricultural, Biological, and Environmental Statistics ( 

Journal of Biopharmaceutical Statistics ( 

Biostatistics ( 

Biometrics ( 

Biometrika ( 

Biometrical Journal ( 

Genetics Selection Evolution ( 




Bioinformatics 

Bioinformatics is the application of statistics and computer 
science to the field of molecular biology. 

The term bioinformatics was coined by Paulien Hogeweg in 
1979 for the study of informatic processes in biotic systems. 
Its primary use since at least the late 1980s has been in 
genomics and genetics, particularly in those areas of genomics 
involving large-scale DNA sequencing. 

Bioinformatics now entails the creation and advancement of 
databases, algorithms, computational and statistical techniques 
and theory to solve formal and practical problems arising from 
the management and analysis of biological data. 

Over the past few decades rapid developments in genomic and 
other molecular research technologies and developments in 
information technologies have combined to produce a 
tremendous amount of information related to molecular 
biology. Bioinformatics is the name given to these mathematical and 
computing approaches used to glean understanding of 
biological processes. 

Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning different 
DNA and protein sequences to compare them and creating and viewing 3-D models of protein structures. 

The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from 
other approaches, however, is its focus on developing and applying computationally intensive techniques (e.g., 
pattern recognition, data mining, machine learning algorithms, and visualization) to achieve this goal. Major research 
efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein 
structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, 
genome-wide association studies and the modeling of evolution. 

Map of the human X chromosome (from the NCBI website). Assembly of the human genome is one of the greatest 
achievements of bioinformatics. 


At the beginning of the "genomic revolution", bioinformatics was applied to the creation and maintenance of 
databases storing biological information, such as nucleotide and amino acid sequences. Development of this type of 
database involved not only design issues but the development of complex interfaces whereby researchers could both 
access existing data as well as submit new or revised data. 

In order to study how normal cellular activities are altered in different disease states, the biological data must be 
combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such 
that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide 
and amino acid sequences, protein domains, and protein structures. The actual process of analyzing and interpreting 
data is referred to as computational biology. Important sub-disciplines within bioinformatics and computational 
biology include: 

• the development and implementation of tools that enable efficient access to, and use and management of, various 
types of information. 

• the development of new algorithms (mathematical formulas) and statistics with which to assess relationships 
among members of large data sets, such as methods to locate a gene within a sequence, predict protein structure 


and/or function, and cluster protein sequences into families of related sequences. 

Major research areas 
Sequence analysis 

Since the Phage Φ-X174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded 
and stored in databases. This sequence information is analyzed to determine genes that encode polypeptides 
(proteins), RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes 
within a species or between different species can show similarities between protein functions, or relations between 
species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long 
ago became impractical to analyze DNA sequences manually. Today, computer programs such as BLAST are used 
daily to search the genomes of thousands of organisms, containing billions of nucleotides. These programs can 
compensate for mutations (exchanged, deleted or inserted bases) in the DNA sequence, in order to identify 
sequences that are related, but not identical. A variant of this sequence alignment is used in the sequencing process 
itself. The so-called shotgun sequencing technique (which was used, for example, by The Institute for Genomic 
Research to sequence the first bacterial genome, Haemophilus influenzae) does not produce entire chromosomes, but 
instead generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides 
long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by 
a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence 
data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as 
large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to 
assemble the fragments, and the resulting assembly will usually contain numerous gaps that have to be filled in later. 
Shotgun sequencing is the method of choice for virtually all genomes sequenced today, and genome assembly 
algorithms are a critical area of bioinformatics research. 
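The overlap-and-merge idea behind assembly can be illustrated with a toy greedy algorithm that repeatedly merges the pair of fragments sharing the longest suffix-prefix overlap. This is a deliberate simplification of real assemblers, which must cope with sequencing errors, repeats and enormous data volumes; the reads below are invented:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:  # no overlaps left; concatenate what remains
            return "".join(frags)
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

# Overlapping reads taken from the toy sequence "ATGGCGTACGCTTGA"
reads = ["ATGGCGTA", "GCGTACGC", "ACGCTTGA"]
assembled = greedy_assemble(reads)  # reconstructs "ATGGCGTACGCTTGA"
```

On these three reads the greedy merge recovers the 15-base source sequence exactly; on real genomes, repeats longer than a read make this naive strategy ambiguous, which is why assembly remains an active research area.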

Another aspect of bioinformatics in sequence analysis is annotation, which involves computational gene finding to 
search for protein-coding genes, RNA genes, and other functional sequences within a genome. Not all of the 
nucleotides within a genome are part of genes. Within the genome of higher organisms, large parts of the DNA do 
not serve any obvious purpose. This so-called junk DNA may, however, contain unrecognized functional elements. 
Bioinformatics helps to bridge the gap between genome and proteome projects— for example, in the use of DNA 
sequences for protein identification. 
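As a minimal illustration of computational gene finding, the sketch below scans one strand for open reading frames (ATG through to a stop codon) in all three reading frames. Real gene finders also handle the reverse strand, splicing and statistical sequence models; the example sequence and minimum length here are made up:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=3):
    """Scan one strand in all three reading frames for ATG..stop ORFs."""
    orfs = []
    for frame in range(3):
        codons = [dna[i:i + 3] for i in range(frame, len(dna) - 2, 3)]
        start = None
        for idx, codon in enumerate(codons):
            if start is None and codon == "ATG":
                start = idx          # open a candidate reading frame
            elif start is not None and codon in STOP_CODONS:
                if idx - start >= min_codons:
                    orfs.append("".join(codons[start:idx + 1]))
                start = None         # close it and keep scanning
    return orfs

orfs = find_orfs("CCATGAAATTTGGGTAACC")  # one ORF, in reading frame 2
```

The single hit here, ATG AAA TTT GGG TAA, is the smallest structure a naive finder reports; annotation pipelines layer many further signals (codon usage, homology, transcript evidence) on top of this basic scan.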

Genome annotation 

In the context of genomics, annotation is the process of marking the genes and other biological features in a DNA 
sequence. The first genome annotation software system was designed in 1995 by Dr. Owen White, who was part of 
the team at The Institute for Genomic Research that sequenced and analyzed the first genome of a free-living 
organism to be decoded, the bacterium Haemophilus influenzae. Dr. White built a software system to find the genes 
(places in the DNA sequence that encode a protein), the transfer RNA, and other features, and to make initial 
assignments of function to those genes. Most current genome annotation systems work similarly, but the programs 
available for analysis of genomic DNA are constantly changing and improving. 


Computational evolutionary biology 

Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics 
has assisted evolutionary biologists in several key ways; it has enabled researchers to: 

• trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through 
physical taxonomy or physiological observations alone, 

• more recently, compare entire genomes, which permits the study of more complex evolutionary events, such as 
gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation, 

• build complex computational models of populations to predict the outcome of the system over time 

• track and share information on an increasingly large number of species and organisms 

Future work endeavours to reconstruct the now more complex tree of life. 

The area of research within computer science that uses genetic algorithms is sometimes confused with computational 
evolutionary biology, but the two areas are not necessarily related. 

Analysis of gene expression 

The expression of many genes can be determined by measuring mRNA levels with multiple techniques including 
microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag 
sequencing, massively parallel signature sequencing (MPSS), or various applications of multiplexed in-situ 
hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological 
measurement, and a major research area in computational biology involves developing statistical tools to separate 
signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes 
implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from 
non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population 
of cancer cells. 
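One common way to formalize "up-regulated and down-regulated" is a log2 fold-change threshold between the two sample groups. The sketch below uses invented mean expression values purely for illustration; real analyses add normalization and statistical testing on replicate measurements:

```python
import math

# Hypothetical mean expression levels per transcript (arbitrary units).
cancer = {"MYC": 820.0, "TP53": 95.0, "GAPDH": 1000.0}
normal = {"MYC": 105.0, "TP53": 410.0, "GAPDH": 980.0}

def log2_fold_change(a, b):
    """log2 ratio of expression; positive = higher in the first sample."""
    return math.log2(a / b)

calls = {}
for gene in cancer:
    lfc = log2_fold_change(cancer[gene], normal[gene])
    if lfc > 1:        # more than 2-fold higher in the cancer sample
        calls[gene] = "up"
    elif lfc < -1:     # more than 2-fold lower
        calls[gene] = "down"
    else:
        calls[gene] = "unchanged"
```

With these made-up numbers the hypothetical MYC transcript is called up-regulated and TP53 down-regulated, while the housekeeping-style GAPDH value falls inside the 2-fold band and is left unchanged.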

Analysis of regulation 

Regulation is the complex orchestration of events starting with an extracellular signal such as a hormone and leading 
to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to 
explore various steps in this process. For example, promoter analysis involves the identification and study of 
sequence motifs in the DNA surrounding the coding region of a gene. These motifs influence the extent to which that 
region is transcribed into mRNA. Expression data can be used to infer gene regulation: one might compare 
microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each 
state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat 
shock, starvation, etc.). One can then apply clustering algorithms to that expression data to determine which genes 
are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for 
over-represented regulatory elements. 
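A common first step in such an analysis is scoring co-expression. The sketch below flags gene pairs whose Pearson correlation across conditions exceeds a threshold; the expression profiles, gene names and the 0.9 cut-off are invented, and real studies cluster thousands of profiles rather than three:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two expression profiles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Invented expression profiles across five conditions (e.g. cell-cycle stages).
profiles = {
    "geneA": [1.0, 2.0, 4.0, 2.0, 1.0],
    "geneB": [1.1, 2.2, 3.9, 2.1, 0.9],   # tracks geneA: candidate co-expression
    "geneC": [4.0, 3.0, 1.0, 3.0, 4.0],   # anti-correlated with geneA
}

coexpressed = [(a, b) for a in profiles for b in profiles
               if a < b and pearson(profiles[a], profiles[b]) > 0.9]
print(coexpressed)
```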

Analysis of protein expression 

Protein microarrays and high-throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins 
present in a biological sample. Bioinformatics is very much involved in making sense of protein microarray and HT 
MS data; the former approach faces problems similar to those of microarrays targeted at mRNA, while the latter involves the 
problems of matching large amounts of mass data against predicted masses from protein sequence databases, and of the 
complicated statistical analysis of samples in which multiple, but incomplete, peptides from each protein are detected. 
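The mass-matching step can be sketched as follows: predicted peptide masses are computed from residue masses and compared with an observed mass within a tolerance. The residue-mass table is truncated and the observed mass and candidate peptides are invented for the example:

```python
# Truncated table of monoisotopic residue masses (Da); a peptide's mass adds
# one water. The observed mass and candidate peptides are invented.
RESIDUE = {"G": 57.021, "A": 71.037, "S": 87.032, "P": 97.053,
           "V": 99.068, "L": 113.084}
WATER = 18.011

def peptide_mass(seq):
    return sum(RESIDUE[aa] for aa in seq) + WATER

def match(observed, candidates, tol=0.02):
    """Candidates whose predicted mass falls within tol Da of the observation."""
    return [p for p in candidates if abs(peptide_mass(p) - observed) <= tol]

candidates = ["GAS", "VLP", "AAL", "GGV"]
print(match(233.101, candidates))  # → ['GAS']
```

Production search engines additionally score fragment-ion spectra and model the statistics of incomplete peptide coverage; this sketch shows only the intact-mass comparison.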

Bioinformatics 284 

Analysis of mutations in cancer 

In cancer, the genomes of affected cells are rearranged in complex or even unpredictable ways. Massive sequencing 
efforts are used to identify previously unknown point mutations in a variety of genes in cancer. Bioinformaticians 
continue to produce specialized automated systems to manage the sheer volume of sequence data produced, and they 
create new algorithms and software to compare the sequencing results to the growing collection of human genome 
sequences and germline polymorphisms. New physical detection technologies are employed, such as oligonucleotide 
microarrays to identify chromosomal gains and losses (called comparative genomic hybridization), and 
single-nucleotide polymorphism arrays to detect known point mutations. These detection methods simultaneously 
measure several hundred thousand sites throughout the genome, and when used in high-throughput to measure 
thousands of samples, generate terabytes of data per experiment. Again the massive amounts and new types of data 
generate new opportunities for bioinformaticians. The data is often found to contain considerable variability, or 
noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy 
number changes. 
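A toy version of change-point analysis illustrates the idea: choose the split of a noisy signal that minimizes within-segment squared error. The probe values below are invented; production methods find multiple change points and model the noise explicitly:

```python
from statistics import mean

def best_changepoint(x):
    """Split index minimising the summed within-segment squared error."""
    def sse(seg):
        m = mean(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(x)), key=lambda k: sse(x[:k]) + sse(x[k:]))

# Invented log-ratio signal along a chromosome: copy number jumps after probe 5.
signal = [0.0, 0.1, -0.1, 0.05, 0.0, 1.0, 1.1, 0.9, 1.05, 1.0]
print(best_changepoint(signal))  # → 5
```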

Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent 
among many tumors. 

Prediction of protein structure 

Protein structure prediction is another important application of bioinformatics. The amino acid sequence of a protein, 
the so-called primary structure, can be easily determined from the sequence of the gene that codes for it. In the vast 
majority of cases, this primary structure uniquely determines a structure in its native environment. (Of course, there 
are exceptions, such as the bovine spongiform encephalopathy, a.k.a. Mad Cow Disease, prion.) Knowledge of this 
structure is vital in understanding the function of the protein. For lack of better terms, structural information is 
usually classified as one of secondary, tertiary and quaternary structure. A viable general solution to such 
predictions remains an open problem; as of now, most efforts have been directed towards heuristics that work most 
of the time. 

One of the key ideas in bioinformatics is the notion of homology. In the genomic branch of bioinformatics, 
homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is 
homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In 
the structural branch of bioinformatics, homology is used to determine which parts of a protein are important in 
structure formation and interaction with other proteins. In a technique called homology modeling, this information is 
used to predict the structure of a protein once the structure of a homologous protein is known. This currently remains 
the only way to predict protein structures reliably. 

One example of this is the homology between human hemoglobin and the hemoglobin in legumes 
(leghemoglobin). Both serve the same purpose of transporting oxygen in the organism. Though these two 
proteins have quite different amino acid sequences, their structures are virtually identical, which 
reflects their near-identical functions. 
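The homology-based transfer of function described above can be caricatured in a few lines: compute percent identity over a pair of pre-aligned, equal-length sequences and transfer the annotation if it clears a threshold. The sequences and the 60% cut-off are invented for the sketch; real pipelines use statistically grounded alignment scores and E-values:

```python
def percent_identity(a, b):
    """Identity over an ungapped alignment of two equal-length sequences."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

# Invented aligned fragments: gene A's function is known, gene B's is not.
gene_a = "MVLSPADKTN"
gene_b = "MVLSGEDKSN"

identity = percent_identity(gene_a, gene_b)
if identity > 60:   # illustrative threshold
    print(f"{identity:.0f}% identity: gene B may share gene A's function")
```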

Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based 
modeling. 

See also: structural motif and structural domain. 


Comparative genomics 

The core of comparative genome analysis is the establishment of the correspondence between genes (orthology 
analysis) or other genomic features in different organisms. It is these intergenomic maps that make it possible to 
trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events 
acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual 
nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, 
transposition, deletion and insertion. Ultimately, whole genomes are involved in processes of hybridization, 
polyploidization and endosymbiosis, often leading to rapid speciation. The complexity of genome evolution poses 
many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of 
algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and 
approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for 
Bayesian analysis of problems based on probabilistic models. 

Many of these studies are based on homology detection and the computation of protein families. 
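A standard heuristic for the orthology analysis described above is reciprocal best hits: two genes are called putative orthologs if each is the other's best-scoring match between the two genomes. In the sketch below a shared k-mer count stands in for a true alignment score, and the gene sequences are invented:

```python
def kmer_score(a, b, k=3):
    """Shared k-mer count: a crude stand-in for an alignment score."""
    kmers = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
    return len(kmers(a) & kmers(b))

def best_hit(query, targets):
    return max(targets, key=lambda t: kmer_score(query, t))

def reciprocal_best_hits(genome1, genome2):
    """Gene pairs that are each other's best match between two genomes."""
    return [(g1, best_hit(g1, genome2)) for g1 in genome1
            if best_hit(best_hit(g1, genome2), genome1) == g1]

# Invented gene sequences for two small genomes.
genome1 = ["ATGGCGTACGT", "TTGACCATGAA"]
genome2 = ["ATGGCGTACGA", "TTGACCATGTT"]
print(reciprocal_best_hits(genome1, genome2))
```

Real orthology pipelines use BLAST-style alignment scores and handle ties, paralogs, and many-to-many relationships, which this sketch ignores.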

Modeling biological systems 

Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of 
metabolites and enzymes which comprise metabolism, signal transduction pathways and gene regulatory networks) 
to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution 
attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms. 
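Such simulations often reduce to integrating ordinary differential equations for the concentrations of the species involved. The sketch below integrates an invented two-gene circuit (X activates Y, Y represses X) with the forward Euler method; the rate laws and constants are made up, and real models use curated kinetics and stiff ODE solvers:

```python
# Forward-Euler integration of a toy two-gene circuit (invented rates):
# gene X's production is repressed by Y; Y's production is activated by X.
def simulate(steps=5000, dt=0.01):
    x, y = 1.0, 0.0                            # initial concentrations
    for _ in range(steps):
        dx = 1.0 / (1.0 + y ** 2) - 0.5 * x    # Hill-type repression, linear decay
        dy = x - 0.5 * y                       # linear activation, linear decay
        x, y = x + dt * dx, y + dt * dy
    return x, y

x, y = simulate()
print(f"steady state: x={x:.3f}, y={y:.3f}")
```

At the fixed point the decay terms balance production, so y settles at twice x for these constants; the simulation converges to that steady state from the chosen initial conditions.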

High-throughput image analysis 

Computational technologies are used to accelerate or fully automate the processing, quantification and analysis of 
large amounts of high-information-content biomedical imagery. Modern image analysis systems augment an 
observer's ability to make measurements from a large or complex set of images, by improving accuracy, objectivity, 
or speed. A fully developed analysis system may completely replace the observer. Although these systems are not 
unique to biomedical imagery, biomedical imaging is becoming more important for both diagnostics and research. 
Some examples are: 

• high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, 
cytohistopathology, Bioimage informatics) 

• clinical image analysis and visualization 

• determining the real-time air-flow patterns in breathing lungs of living animals 

• quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury 

• making behavioral observations from extended video recordings of laboratory animals 

• infrared measurements for metabolic activity determination 

• inferring clone overlaps in DNA mapping, e.g. the Sulston score 

Protein-protein docking 

In the last two decades, tens of thousands of protein three-dimensional structures have been determined by X-ray 
crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR). One central question for the 
biological scientist is whether it is practical to predict possible protein-protein interactions based only on these 3D 
shapes, without doing protein-protein interaction experiments. A variety of methods have been developed to tackle 
the protein-protein docking problem, though it seems that there is still much work to be done in this field. 


Software and tools 

Software tools for bioinformatics range from simple command-line tools to more complex graphical programs and 
standalone web services available from various bioinformatics companies or public institutions. 

Web services in bioinformatics 

SOAP- and REST-based interfaces have been developed for a wide variety of bioinformatics applications, allowing an 
application running on one computer in one part of the world to use algorithms, data and computing resources on 
servers in other parts of the world. The main advantages derive from the fact that end users do not have to deal with 
software and database maintenance overheads. 

Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA 
(Multiple Sequence Alignment) and BSA (Biological Sequence Analysis). The availability of these service-oriented 
bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a 
collection of standalone tools with a common data format under a single, standalone or web-based interface, to 
integrative, distributed and extensible bioinformatics workflow management systems. 
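Using such a REST interface typically amounts to composing a query URL and fetching it over HTTP. The sketch below only builds the URL; the endpoint and parameter names are invented, since each real service (at the EBI, NCBI, and elsewhere) documents its own URL scheme and parameters:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names: each real service defines its own.
BASE = "https://example.org/api/blast"

def search_url(sequence, program="blastp", database="uniprot"):
    """Compose the query URL for a REST-style sequence-search service."""
    return BASE + "?" + urlencode({"program": program, "db": database, "seq": sequence})

url = search_url("MVLSPADKTN")
print(url)
```

The point of the pattern is that the client needs nothing beyond an HTTP library: the algorithms, databases, and their maintenance all stay on the server.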

See also 

• Bioinformatics companies 
• Computational biomodeling 
• Computational genomics 
• DNA sequencing theory 
• Dot plot (bioinformatics) 
• Functional genomics 
• Health informatics 
• List of scientific journals in bioinformatics 
• Margaret Oakley Dayhoff 
• Metabolic network modelling 
• Molecular design software 
• Molecular modeling on GPU 
• Nucleic acid simulation software 
• Protein-protein interaction prediction 


Further reading 

• Achuthsankar S. Nair, Computational Biology & Bioinformatics: A Gentle Overview, Communications of the Computer Society of India, January 2007 
• Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/CRC, 2006. ISBN 1584884061 (Chapman & Hall/CRC Computer and Information Science Series) 
• Baldi, P. and Brunak, S., Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001. 
• Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003. ISBN 0-470-84394-2 
• Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005. ISBN 0-471-47878-4 
• Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007. ISBN 0-471-25093-7 
• Claverie, J.M. and Notredame, C., Bioinformatics for Dummies. Wiley, 2003. ISBN 0-7645-1696-5 
• Cristianini, N. and Hahn, M., Introduction to Computational Genomics. Cambridge University Press, 2006. ISBN 9780521671910 
• Durbin, R., Eddy, S., Krogh, A. and Mitchison, G., Biological Sequence Analysis. Cambridge University Press, 1998. ISBN 0-521-62971-3 
• Gilbert, D., Bioinformatics software resources. Briefings in Bioinformatics 2004, 5(3):300-304 
• Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005. ISBN 0-470-02175-6 
• Kohane, et al., Microarrays for an Integrative Genomics. The MIT Press, 2002. ISBN 0-262-11271-X 
• Lund, O., et al., Immunological Bioinformatics. The MIT Press, 2005. ISBN 0-262-12280-4 
• Waterman, Michael S., Introduction to Computational Biology: Sequences, Maps and Genomes. CRC Press, 1995. ISBN 0-412-99391-0 
• Mount, David W., Bioinformatics: Sequence and Genome Analysis. Cold Spring Harbor Laboratory Press, May 2002. 
• Pachter, Lior and Sturmfels, Bernd, Algebraic Statistics for Computational Biology. Cambridge University Press, 2005. ISBN 0-521-85700-7 
• Pevzner, Pavel A., Computational Molecular Biology: An Algorithmic Approach. The MIT Press, 2000. 
• Soinov, L., Bioinformatics and Pattern Recognition Come Together. Journal of Pattern Recognition Research (JPRR), Vol. 1 (1), 2006, pp. 37-41 
• Tisdall, James, Beginning Perl for Bioinformatics. O'Reilly, 2001. ISBN 0-596-00080-4 
• Dedicated issue of Philosophical Transactions B on Bioinformatics, freely available online 
• Catalyzing Inquiry at the Interface of Computing and Biology (2005), CSTB report 
• Calculating the Secrets of Life: Contributions of the Mathematical Sciences and Computing to Molecular Biology (1995) 
• Foundations of Computational and Systems Biology, MIT course 
• Computational Biology: Genomes, Networks, Evolution, free MIT course 
• Algorithms for Computational Biology, free MIT course 
• Zhang, Z., Cheung, K.H. and Townsend, J.P., Bringing Web 2.0 to bioinformatics. Briefings in Bioinformatics, in press 

External links 

• Bioinformatics Organization 

• Bioinformatics Research Groups - Google Directory 

• EMBnet 

• Open Bioinformatics Foundation 

Genomics 


Genomics is a discipline in genetics concerning the study of the genomes of organisms. The field includes intensive 
efforts to determine the entire DNA sequence of organisms and fine-scale genetic mapping efforts. The field also 
includes studies of intragenomic phenomena such as heterosis, epistasis, pleiotropy and other interactions between 
loci and alleles within the genome. In contrast, the investigation of the roles and functions of single genes is a 
primary focus of molecular biology or genetics and is a common topic of modern medical and biological research. 
Research into single genes does not fall within the definition of genomics unless the aim of this genetic, pathway, and 
functional information analysis is to elucidate its effect on, place in, and response to the entire genome's networks. 

For the United States Environmental Protection Agency, "the term 'genomics' encompasses a broader scope of 
scientific inquiry and associated technologies than when genomics was initially considered. A genome is the sum total of 
all an individual organism's genes. Thus, genomics is the study of all the genes of a cell, or tissue, at the DNA 
(genotype), mRNA (transcriptome), or protein (proteome) levels." 


Genomics was established by Fred Sanger when he first sequenced the complete genomes of a virus and a 
mitochondrion. His group established techniques of sequencing, genome mapping, data storage, and bioinformatic 
analyses in the 1970s and 1980s. A major branch of genomics is still concerned with sequencing the genomes of various 
organisms, but the knowledge of full genomes has created the possibility for the field of functional genomics, mainly 
concerned with patterns of gene expression during various conditions. The most important tools here are microarrays 
and bioinformatics. Study of the full set of proteins in a cell type or tissue, and the changes during various 
conditions, is called proteomics. A related concept is materiomics, which is defined as the study of the material 
properties of biological materials (e.g. hierarchical protein structures and materials, mineralized biological tissues, 
etc.) and their effect on the macroscopic function and failure in their biological context, linking processes, structure 
and properties at multiple scales through a materials science approach. The actual term 'genomics' is thought to have 
been coined by Dr. Tom Roderick, a geneticist at the Jackson Laboratory (Bar Harbor, ME) over beer at a meeting 
held in Maryland on the mapping of the human genome in 1986. 

In 1972, Walter Fiers and his team at the Laboratory of Molecular Biology of the University of Ghent (Ghent, 
Belgium) were the first to determine the sequence of a gene: the gene for bacteriophage MS2 coat protein. In 
1976, the team determined the complete nucleotide sequence of bacteriophage MS2 RNA. The first DNA-based 
genome to be sequenced in its entirety was that of bacteriophage Phi X 174 (5,368 bp), sequenced by Frederick 
Sanger in 1977. [4] 

The first free-living organism to be sequenced was Haemophilus influenzae (1.8 Mb) in 1995, and since then 
genomes have been sequenced at a rapid pace. 

As of September 2007, the complete sequence was known of about 1879 viruses, 577 bacterial species and 
roughly 23 eukaryote organisms, of which about half are fungi. Most of the bacteria whose genomes have been 
completely sequenced are problematic disease-causing agents, such as Haemophilus influenzae. Of the other 
sequenced species, most were chosen because they were well-studied model organisms or promised to become good 
models. Yeast (Saccharomyces cerevisiae) has long been an important model organism for the eukaryotic cell, while 
the fruit fly Drosophila melanogaster has been a very important tool (notably in early pre-molecular genetics). The 
worm Caenorhabditis elegans is an often used simple model for multicellular organisms. The zebrafish Brachydanio 
rerio is used for many developmental studies on the molecular level, and the flower Arabidopsis thaliana is a model 
organism for flowering plants. The Japanese pufferfish (Takifugu rubripes) and the spotted green pufferfish 
(Tetraodon nigroviridis) are interesting because of their small and compact genomes, containing very little 
non-coding DNA compared to most species. The mammals dog (Canis familiaris), brown rat (Rattus norvegicus), 
mouse (Mus musculus), and chimpanzee (Pan troglodytes) are all important model animals in medical research. 
Human genomics 

A rough draft of the human genome was completed by the Human Genome Project in early 2001, creating much 
fanfare. By 2007 the human sequence was declared "finished" (less than one error in 20,000 bases and all 
chromosomes assembled). Display of the results of the project required significant bioinformatics resources. The 
sequence of the human reference assembly can be explored using the UCSC Genome Browser. 

Bacteriophage genomics 

Bacteriophages have played and continue to play a key role in bacterial genetics and molecular biology. Historically, 
they were used to define gene structure and gene regulation. Also the first genome to be sequenced was a 
bacteriophage. However, bacteriophage research did not lead the genomics revolution, which is clearly dominated by 
bacterial genomics. Only very recently has the study of bacteriophage genomes become prominent, thereby enabling 
researchers to understand the mechanisms underlying phage evolution. Bacteriophage genome sequences can be 
obtained through direct sequencing of isolated bacteriophages, but can also be derived as part of microbial genomes. 
Analysis of bacterial genomes has shown that a substantial amount of microbial DNA consists of prophage 
sequences and prophage-like elements. A detailed database mining of these sequences offers insights into the role of 
prophages in shaping the bacterial genome. 

Cyanobacteria genomics 

At present there are 24 cyanobacteria for which a total genome sequence is available. Fifteen of these cyanobacteria come 
from the marine environment. These are six Prochlorococcus strains, seven marine Synechococcus strains, 
Trichodesmium erythraeum IMS101 and Crocosphaera watsonii WH8501. Several studies have demonstrated how 
these sequences could be used very successfully to infer important ecological and physiological characteristics of 
marine cyanobacteria. However, many more genome projects are currently in progress; among these are 
further Prochlorococcus and marine Synechococcus isolates, Acaryochloris and Prochloron, the N2-fixing 
filamentous cyanobacteria Nodularia spumigena, Lyngbya aestuarii and Lyngbya majuscula, as well as 
bacteriophages infecting marine cyanobacteria. Thus, the growing body of genome information can also be tapped in 
a more general way to address global problems by applying a comparative approach. Some new and exciting 
examples of progress in this field are the identification of genes for regulatory RNAs, insights into the evolutionary 
origin of photosynthesis, or estimation of the contribution of horizontal gene transfer to the genomes that have been 
analyzed. 
See also 

• Full Genome Sequencing 
• Computational genomics 
• Predictive Medicine 
• Personal genomics 



References 

[1] EPA Interim Genomics Policy 
[2] Min Jou W, Haegeman G, Ysebaert M, Fiers W (1972). "Nucleotide sequence of the gene coding for the bacteriophage MS2 coat protein". Nature 237 (5350): 82-88. doi:10.1038/237082a0. PMID 4555447. 
[3] Fiers W, Contreras R, Duerinck F, Haegeman G, Iserentant D, Merregaert J, Min Jou W, Molemans F, Raeymaekers A, Van den Berghe A, Volckaert G, Ysebaert M (1976). "Complete nucleotide sequence of bacteriophage MS2 RNA: primary and secondary structure of the replicase gene". Nature 260 (5551): 500-507. doi:10.1038/260500a0. PMID 1264203. 
[4] Sanger F, Air GM, Barrell BG, Brown NL, Coulson AR, Fiddes CA, Hutchison CA, Slocombe PM, Smith M (1977). "Nucleotide sequence of bacteriophage phi X174 DNA". Nature 265 (5596): 687-695. doi:10.1038/265687a0. PMID 870828. 
[5] The Viral Genomes Resource, NCBI, 14 September 2007 
[6] Genome Project Statistics, NCBI, 14 September 2007 
[7] BBC article "Human gene number slashed", 20 October 2004 
[8] CBSE News, 16 October 2003 
[9] NHGRI press release on the publication of the dog genome 
[10] McGrath S and van Sinderen D, eds (2007). Bacteriophage: Genetics and Molecular Biology (1st ed.). Caister Academic Press. ISBN 978-1-904455-14-1. 
[11] Herrero A and Flores E, eds (2008). The Cyanobacteria: Molecular Biology, Genomics and Evolution (1st ed.). Caister Academic Press. ISBN 978-1-904455-15-8. 

External links 

• Genomics Directory: a one-stop biotechnology resource center for bioentrepreneurs, scientists, and students 
• Annual Review of Genomics and Human Genetics 
• BMC Genomics: a BMC journal on genomics 
• Genomics: UK companies and laboratories 
• Genomics journal 
• Genomics and Quantitative Genetics: an international electronic, open-access journal publishing, inter alia, genomics research 
• An open, free wiki-based genomics portal 
• NHGRI: the US government's genome institute 
• Pharmacogenomics in Drug Discovery and Development: a book on pharmacogenomics, diseases, personalized medicine, and therapeutics 
• Tishchenko P. D., Genomics: New Science in the New Cultural Situation 
• Undergraduate program on Genomic Sciences (Spanish): one of the first undergraduate programs in the world 
• JCVI Comprehensive Microbial Resource 
• Pathema: a clade-specific bioinformatics resource center 
• The first Korean genome published, with the sequence available 
• GenomicsNetwork: looks at the development and use of the science and technologies of genomics 
• Institute for Genome Sciences: genomics research 


Computational genomics 

Computational genomics refers to the use of computational analysis to decipher biology from genome sequences 
and related data, including both DNA and RNA sequences as well as other "post-genomic" data (i.e. experimental 
data obtained with technologies that require the genome sequence, such as genomic DNA microarrays). As such, 
computational genomics may be regarded as a subset of bioinformatics, but with a focus on using whole genomes 
(rather than individual genes) to understand the principles of how the DNA of a species controls its biology at the 
molecular level and beyond. With the current abundance of massive biological datasets, computational studies have 
become one of the most important means to biological discovery. 


The roots of computational genomics are shared with those of bioinformatics. During the 1960s, Margaret Dayhoff 
and others at the National Biomedical Research Foundation assembled databases of homologous protein sequences 
for evolutionary study. Their research developed a phylogenetic tree that determined the evolutionary changes that 
were required for a particular protein to change into another protein based on the underlying amino acid sequences. 
This led them to create a scoring matrix that assessed the likelihood of one protein being related to another. 

Beginning in the 1980s, databases of genome sequences began to be recorded, but this presented new challenges in 
the form of searching and comparing the databases of gene information. Unlike text-searching algorithms that are 
used on websites such as Google or Wikipedia, searching for sections of genetic similarity requires one to find strings 
that are not simply identical, but similar. This led to the development of the Needleman-Wunsch algorithm, which is 
a dynamic programming algorithm for comparing sets of amino acid sequences with each other by using scoring 
matrices derived from the earlier research by Dayhoff. Later, the BLAST algorithm was developed for performing 
fast, optimized searches of gene sequence databases. BLAST and its derivatives are probably the most widely used 
algorithms for this purpose. 
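The Needleman-Wunsch algorithm itself is short enough to sketch: a dynamic-programming table in which each cell takes the best of a (mis)match or a gap in either sequence. The unit match/mismatch/gap scores below are illustrative; real aligners use substitution matrices such as Dayhoff's PAM family:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score by dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):              # a prefix aligned against nothing
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                   # (mis)match
                              score[i - 1][j] + gap,  # gap in b
                              score[i][j - 1] + gap)  # gap in a
    return score[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # → 0
```

The table filled here also encodes the optimal alignment itself, which can be recovered by tracing back from the bottom-right cell.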

The emergence of the phrase "computational genomics" coincides with the availability of complete sequenced 
genomes in the mid-to-late 1990s. The first meeting of the Annual Conference on Computational Genomics was 
organized by scientists from The Institute for Genomic Research (TIGR) in 1998, providing a forum for this 
speciality and effectively distinguishing this area of science from the more general fields of genomics or 
computational biology. The first use of this term in scientific literature, according to MEDLINE abstracts, was 
just one year earlier in Nucleic Acids Research. The final Computational Genomics conference was held in 2006, 
featuring a keynote talk by Nobel Laureate Barry Marshall, co-discoverer of the link between Helicobacter pylori 
and stomach ulcers. As of 2010, the leading conferences in the field include Intelligent Systems for Molecular 
Biology (ISMB), RECOMB, and the Cold Spring Harbor Laboratory and Sanger Institute's meetings titled "Biology 
of Genomes" and "Genome Informatics". 

The development of computer-assisted mathematics (using products such as Mathematica or Matlab) has helped 
engineers, mathematicians and computer scientists to start operating in this domain, and a public collection of case 
studies and demonstrations is growing, ranging from whole-genome comparisons to gene expression analysis. 
This has increased the introduction of different ideas, including concepts from systems and control, information 
theory, string analysis and data mining. It is anticipated that computational approaches will become and remain a 
standard topic for research and teaching, while students fluent in both domains are increasingly being trained in the 
many courses created in the past few years. 


Contributions of computational genomics research to biology 

Contributions of computational genomics research to biology include: 

• discovering subtle patterns in genomic sequences 

• proposing cellular signalling networks 

• proposing mechanisms of genome evolution 

• predicting precise locations of all human genes using comparative genomics techniques with several mammalian and 
vertebrate species 

• predicting conserved genomic regions that are related to early embryonic development 

• discovering potential links between repeated sequence motifs and tissue-specific gene expression 

• measuring regions of genomes that have undergone unusually rapid evolution 

See also 

• Computational biology 
• Computational epigenetics 

References 

[1] Koonin EV (2001). Computational Genomics. National Center for Biotechnology Information, National Library of Medicine, NIH. PMID 11267880 
[2] Computational Genomics and Proteomics at MIT 
[3] David Mount (2000). Bioinformatics: Sequence and Genome Analysis, pp. 2-3. Cold Spring Harbor Laboratory Press. ISBN 0-87969-597-8 
[4] T.A. Brown (1999). Genomes. John Wiley & Sons. ISBN 0-471-31618-0 
[5] The 7th Annual Conference on Computational Genomics (2004) 
[6] The 9th Annual Conference on Computational Genomics (2006) 
[7] A. Wagner (1997). A computational genomics approach to the identification of gene networks. Nucleic Acids Res., Sep 15;25(18):3594-604. ISSN 0305-1048 
[8] Cristianini, N. and Hahn, M. Introduction to Computational Genomics. Cambridge University Press, 2006. ISBN 9780521671910 

External links 

• Harvard Extension School Biophysics 101, Genomics and Computational Biology 
• University of Bristol course in Computational Genomics 




Proteomics 

Proteomics is the large-scale study of proteins, particularly their structures and functions. Proteins are vital parts of 
living organisms, as they are the main components of the physiological metabolic pathways of cells. The term 
"proteomics" was first coined in 1997 to make an analogy with genomics, the study of the genes. The word 
"proteome" is a blend of "protein" and "genome", and was coined by Marc Wilkins in 1994 while working on the 
concept as a PhD student. The proteome is the entire complement of proteins, including the modifications made to a 
particular set of proteins, produced by an organism or system. This will vary with time and distinct requirements, or 
stresses, that a cell or organism undergoes. 

Robotic preparation of MALDI mass spectrometry samples on a sample carrier. 

Complexity of the problem 

After genomics, proteomics is considered the next step in the study of biological systems. It is much more 
complicated than genomics, mostly because while an organism's genome is more or less constant, the proteome 
differs from cell to cell and from time to time. This is because distinct genes are expressed in distinct cell types, 
which means that even the basic set of proteins produced in a cell needs to be determined. 

In the past this was done by mRNA analysis, but this was found not to correlate with protein content. It is now 
known that mRNA is not always translated into protein, and the amount of protein produced for a given amount of 
mRNA depends on the gene it is transcribed from and on the current physiological state of the cell. Proteomics 
confirms the presence of the protein and provides a direct measure of the quantity present. 

Post-translational modifications 

Not only does the translation from mRNA cause differences, many proteins are also subjected to a wide variety of 
chemical modifications after translation. Many of these post-translational modifications are critical to the protein's 
function. 


One such modification is phosphorylation, which happens to many enzymes and structural proteins in the process of 
cell signaling. The addition of a phosphate to particular amino acids (most commonly serine and threonine, 
mediated by serine/threonine kinases, or more rarely tyrosine, mediated by tyrosine kinases) causes a protein to 
become a target for binding or interacting with a distinct set of other proteins that recognize the phosphorylated 
domain. 

Because protein phosphorylation is one of the most-studied protein modifications many "proteomic" efforts are 
geared to determining the set of phosphorylated proteins in a particular cell or tissue-type under particular 
circumstances. This alerts the scientist to the signaling pathways that may be active in that instance. 
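A first computational pass in such an effort is often to scan sequences for kinase consensus motifs. The sketch below uses an R-R-x-[S/T] pattern that approximates the PKA consensus; the sequence is hypothetical, and real phosphosite prediction uses far richer models than a single regular expression:

```python
import re

# Simplified serine/threonine kinase consensus motif: R-R-x-[S/T]
# (approximates the PKA consensus; illustrative only).
PKA_MOTIF = re.compile(r"RR.[ST]")

def candidate_sites(sequence):
    """Return (position, window) for each hit; position is the 1-based
    index of the candidate phospho-acceptor S/T residue."""
    hits = []
    for m in PKA_MOTIF.finditer(sequence):
        hits.append((m.start() + 4, m.group()))  # S/T is the 4th window residue
    return hits

seq = "MKTRRASLDEGVRRQTPLK"  # hypothetical sequence fragment
print(candidate_sites(seq))  # [(7, 'RRAS'), (16, 'RRQT')]
```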

Proteomics 294 

Ubiquitination


Ubiquitin is a small protein that can be affixed to certain protein substrates by enzymes called E3 ubiquitin ligases. 
Determining which proteins are poly-ubiquitinated can be helpful in understanding how protein pathways are 
regulated. This is therefore an additional legitimate "proteomic" study. Similarly, once it is determined which
substrates are ubiquitinated by each ligase, determining the set of ligases expressed in a particular cell type will be
helpful.

Additional modifications 

Listing all the protein modifications that might be studied in a "Proteomics" project would require a discussion of 
most of biochemistry; therefore, a short list will serve here to illustrate the complexity of the problem. In addition to 
phosphorylation and ubiquitination, proteins can be subjected to (among others) methylation, acetylation, 
glycosylation, oxidation and nitrosylation. Some proteins undergo ALL of these modifications, often in 
time-dependent combinations, aptly illustrating the potential complexity one has to deal with when studying protein 
structure and function. 

Distinct proteins are made under distinct settings 

Even if one is studying a particular cell type, that cell may make different sets of proteins at different times, or under 
different conditions. Furthermore, as mentioned, any one protein can undergo a wide range of post-translational modifications.

Therefore a "proteomics" study can become quite complex very quickly, even if the object of the study is very 
restricted. In more ambitious settings, such as when a biomarker for a tumor is sought - when the proteomics 
scientist is obliged to study sera samples from multiple cancer patients - the amount of complexity that must be dealt 
with is as great as in any modern biological project. 

Limitations to genomic study 

Scientists are very interested in proteomics because it gives a much better understanding of an organism than 
genomics. First, the level of transcription of a gene gives only a rough estimate of its level of expression into a 
protein. An mRNA produced in abundance may be degraded rapidly or translated inefficiently, resulting in a small 
amount of protein. Second, as mentioned above many proteins experience post-translational modifications that 
profoundly affect their activities; for example some proteins are not active until they become phosphorylated. 
Methods such as phosphoproteomics and glycoproteomics are used to study post-translational modifications. Third, 
many transcripts give rise to more than one protein, through alternative splicing or alternative post-translational 
modifications. Fourth, many proteins form complexes with other proteins or RNA molecules, and only function in 
the presence of these other molecules. Finally, protein degradation rate plays an important role in protein content. 


Methods of studying proteins 

Determining proteins which are post-translationally modified 

One way in which a particular post-translational modification can be studied is to develop an antibody which is
specific to that modification. For example, there are antibodies which only recognize certain proteins when they are
tyrosine-phosphorylated (known as phospho-specific antibodies); there are also antibodies specific to other
modifications. These can be used to determine the set of proteins that have undergone the modification of interest.

For sugar modifications, such as glycosylation of proteins, certain lectins have been discovered which bind sugars. 
These too can be used. 

A more common way to determine a post-translational modification of interest is to subject a complex mixture of
proteins to electrophoresis in two dimensions, which simply means that the proteins are electrophoresed first in
one direction and then in another; this allows small differences in a protein to be visualized by separating a
modified protein from its unmodified form. This methodology is known as "two-dimensional gel electrophoresis".
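In classical 2D gels the first dimension separates by charge (isoelectric point) and the second by mass, so a modification that adds mass or charge shifts the spot. The sketch below illustrates the mass side, using standard average residue masses; the peptide and the single phosphorylation are hypothetical, and the ~80 Da phosphate shift is the kind of difference the mass dimension can reveal:

```python
# Approximate average masses of amino-acid residues (Da).
AVG_RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02       # H2O added for the free termini
PHOSPHATE = 79.98   # mass added by one phosphorylation (HPO3)

def peptide_mass(seq, n_phospho=0):
    """Average mass of a peptide carrying n_phospho phosphate groups."""
    return sum(AVG_RESIDUE_MASS[aa] for aa in seq) + WATER + n_phospho * PHOSPHATE

seq = "SAMPLER"  # hypothetical peptide
print(f"unmodified: {peptide_mass(seq):.2f} Da")
print(f"phosphorylated: {peptide_mass(seq, 1):.2f} Da")
```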

Recently, another approach has been developed called PROTOMAP, which combines SDS-PAGE with shotgun
proteomics to enable detection of changes in gel migration such as those caused by proteolysis or post-translational modification.

Determining the existence of proteins in complex mixtures 

Classically, antibodies to particular proteins or to their modified forms have been used in biochemistry and cell 
biology studies. These are among the most common tools used by practicing biologists today. 

For more quantitative determinations of protein amounts, techniques such as ELISAs can be used.

For proteomic study, more recent techniques such as matrix-assisted laser desorption/ionization (MALDI) and,
increasingly, electrospray ionization (ESI) have been employed for rapid determination of proteins in particular mixtures.
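A common way MALDI data identify proteins is peptide mass fingerprinting: the spectrum's peak masses are compared with an in-silico digest of candidate sequences. A sketch under simplifying assumptions (average residue masses, an idealized trypsin rule, hypothetical sequence and peaks):

```python
import re

AVG_RESIDUE_MASS = {  # average residue masses (Da)
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02

def tryptic_digest(seq):
    """Cleave C-terminal to K or R, but not when followed by P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", seq) if p]

def mass(peptide):
    return sum(AVG_RESIDUE_MASS[aa] for aa in peptide) + WATER

def fingerprint_matches(seq, observed_masses, tol=0.5):
    """Count observed peaks explained by the digest of a candidate sequence."""
    peptides = tryptic_digest(seq)
    return sum(
        1 for obs in observed_masses
        if any(abs(obs - mass(p)) <= tol for p in peptides)
    )

observed = [277.4, 665.8, 900.0]  # hypothetical MALDI peak masses (Da)
print(tryptic_digest("MKTAYIAKQR"))                 # ['MK', 'TAYIAK', 'QR']
print(fingerprint_matches("MKTAYIAKQR", observed))  # 2 of 3 peaks explained
```

The candidate whose digest explains the most peaks is reported as the likely identification; real search engines also weigh peak intensities and database size.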

Establishing protein-protein interactions 

Most proteins function in collaboration with other proteins, and one goal of proteomics is to identify which proteins 
interact. This is especially useful in determining potential partners in cell signaling cascades. 

Several methods are available to probe protein-protein interactions. The traditional method is yeast two-hybrid 
analysis. New methods include protein microarrays, immunoaffinity chromatography followed by mass 
spectrometry, dual polarisation interferometry, and experimental methods such as phage display and computational methods.
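However the pairwise interactions are measured, they are naturally organized as a graph so that potential partners in a cascade can be queried. A minimal sketch; the protein names and edges below are hypothetical:

```python
from collections import defaultdict

# Hypothetical pairwise interactions, e.g. from two-hybrid screens.
interactions = [
    ("RAF", "MEK"), ("MEK", "ERK"), ("ERK", "MYC"),
    ("RAF", "14-3-3"), ("ERK", "RSK"),
]

# Build an undirected interaction network as an adjacency map.
network = defaultdict(set)
for a, b in interactions:
    network[a].add(b)
    network[b].add(a)

def partners(protein):
    """Direct interaction partners of a protein."""
    return sorted(network[protein])

print(partners("ERK"))  # ['MEK', 'MYC', 'RSK']
```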

Practical applications of proteomics 

One of the most promising developments to come from the study of human genes and proteins has been the 
identification of potential new drugs for the treatment of disease. This relies on genome and proteome information to 
identify proteins associated with a disease, which computer software can then use as targets for new drugs. For 
example, if a certain protein is implicated in a disease, its 3D structure provides the information to design drugs to 
interfere with the action of the protein. A molecule that fits the active site of an enzyme, but cannot be released by 
the enzyme, will inactivate the enzyme. This is the basis of new drug-discovery tools, which aim to find new drugs 
to inactivate proteins involved in disease. As genetic differences among individuals are found, researchers expect to 
use these techniques to develop personalized drugs that are more effective for the individual. 

A computer technique which attempts to fit millions of small molecules to the three-dimensional structure of a 
protein is called "virtual ligand screening". The computer rates the quality of the fit to various sites in the protein, 
with the goal of either enhancing or disabling the function of the protein, depending on its function in the cell. A
good example of this is the identification of new drugs to target and inactivate the HIV-1 protease. The HIV-1
protease is an enzyme that cleaves a very large HIV protein into smaller, functional proteins. The virus cannot 
survive without this enzyme; therefore, it is one of the most effective protein targets for killing HIV. 
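The ranking step of virtual ligand screening can be caricatured in a few lines. Real tools score geometric and chemical complementarity against the protein's 3D structure; the feature vectors and score function below are placeholders that only illustrate the screen-then-rank workflow:

```python
# Toy virtual screening: rank candidate ligands by how well simple
# descriptors match the target site (all values hypothetical).

def score(ligand, site):
    """Lower is better: sum of squared feature mismatches (placeholder)."""
    return sum((l - s) ** 2 for l, s in zip(ligand, site))

# Hypothetical feature vectors (e.g. size, hydrophobicity, charge).
active_site = (4.0, 0.7, -1.0)
candidates = {
    "ligand_A": (3.9, 0.6, -0.9),
    "ligand_B": (6.0, 0.1, 1.0),
    "ligand_C": (4.2, 0.8, -1.2),
}

ranked = sorted(candidates, key=lambda name: score(candidates[name], active_site))
print(ranked)  # best-fitting candidate first: ['ligand_A', 'ligand_C', 'ligand_B']
```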


Biomarkers

The FDA defines a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of
normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention".

Understanding the proteome, the structure and function of each protein, and the complexities of protein-protein
interactions will be critical for developing the most effective diagnostic techniques and disease treatments in the future.

An interesting use of proteomics is using specific protein biomarkers to diagnose disease. A number of techniques
allow testing for proteins produced during a particular disease, which helps to diagnose the disease quickly.
Techniques include western blot, immunohistochemical staining, enzyme linked immunosorbent assay (ELISA) or 
mass spectrometry. 
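For quantitative read-outs such as ELISA, a measured absorbance is converted to a biomarker concentration via a standard curve. A sketch using linear interpolation; real analyses usually fit a four-parameter logistic curve, and the standards here are hypothetical:

```python
# Hypothetical ELISA standard curve:
# (concentration in ng/mL, absorbance) pairs, sorted by absorbance.
standards = [(0.0, 0.05), (1.0, 0.20), (5.0, 0.70), (10.0, 1.10), (20.0, 1.60)]

def concentration(absorbance):
    """Interpolate a concentration for an absorbance within the curve range."""
    for (c0, a0), (c1, a1) in zip(standards, standards[1:]):
        if a0 <= absorbance <= a1:
            return c0 + (c1 - c0) * (absorbance - a0) / (a1 - a0)
    raise ValueError("absorbance outside the standard curve")

print(f"{concentration(0.90):.2f} ng/mL")  # 7.50 ng/mL
```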

Current research methodologies 

There are many approaches to attempting to characterize the human proteome, which is estimated to exceed 100,000
unique forms arising from roughly 25,000 genes plus post-translational modifications.

See also 

Proteomic chemistry 




List of omics topics in biology 



Shotgun proteomics 

Top-down proteomics 

Bottom-up proteomics 

Systems biology 




Functional genomics 

Activity based proteomics 


Protein databases 


Protein Information Resource (PIR) 


Protein Data Bank (PDB) 

National Center for Biotechnology Information (NCBI) 

Human Protein Reference Database 

Proteomics Identifications Database (PRIDE) 

Proteopedia: the collaborative, 3D encyclopedia of proteins and other molecules.

References


[1] Anderson NL, Anderson NG (1998). "Proteome and proteomics: new technologies, new concepts, and new words". Electrophoresis 19 (11): 

1853-61. doi:10.1002/elps.1150191103. PMID 9740045.
[2] Blackstock WP, Weir MP (1999). "Proteomics: quantitative and physical mapping of cellular proteins". Trends Biotechnol. 17 (3): 121-7.

doi:10.1016/S0167-7799(98)01245-1. PMID 10189717.
[3] P. James (1997). "Protein identification in the post-genome era: the rapid rise of proteomics.". Quarterly reviews of biophysics 30 (4): 

279-331. doi:10.1017/S0033583597003399. PMID 9634650. 
[4] Marc R. Wilkins, Christian Pasquali, Ron D. Appel, Keli Ou, Olivier Golaz, Jean-Charles Sanchez, Jun X. Yan, Andrew. A. Gooley, Graham 

Hughes, Ian Humphery-Smith, Keith L. Williams & Denis F. Hochstrasser (1996). "From Proteins to Proteomes: Large Scale Protein 

Identification by Two-Dimensional Electrophoresis and Amino Acid Analysis". Nature Biotechnology 14 (1): 61-65.

doi:10.1038/nbt0196-61. PMID 9636313. 
[5] UNSW Staff Bio: Professor Marc Wilkins (http://www.babs. au/directory.php?personnelID=12) 
[6] Simon Rogers, Mark Girolami, Walter Kolch, Katrina M. Waters, Tao Liu, Brian Thrall and H. Steven Wiley (2008). "Investigating the 

correspondence between transcriptomic and proteomic expression profiles using coupled cluster models". Bioinformatics 24 (24): 2894-2900.

doi:10.1093/bioinformatics/btn553. PMID 18974169. 
[7] Vikas Dhingraa, Mukta Gupta, Tracy Andacht and Zhen F. Fu (2005). "New frontiers in proteomics research: A perspective". International 

Journal of Pharmaceutics 299 (1-2): 1-18. doi:10.1016/j.ijpharm.2005.04.010. PMID 15979831. 
[8] Buckingham, Steven (May 2003). "The major world of microRNAs". Retrieved 2009-01-14.
[9] Olsen JV, Blagoev B, Gnad F, Macek B, Kumar C, Mortensen P, Mann M. (2006). "Global, in vivo, and site-specific phosphorylation 

dynamics in signaling networks". Cell 127 (3): 635-648. doi:10.1016/j.cell.2006.09.026. PMID 17081983. 
[10] Archana Belle, Amos Tanay, Ledion Bitincka, Ron Shamir and Erin K. O'Shea (2006). "Quantification of protein half-lives in the budding 

yeast proteome". PNAS 103 (35): 13004-13009.

doi:10.1073/pnas.0605420103. PMID 16916930. PMC 1550773. 

Bibliography


• Belhajjame, K. et al. Proteome Data Integration: Characteristics and Challenges ( 
2005/proceedings/papers/525.pdf). Proceedings of the UK e-Science All Hands Meeting, ISBN 1-904425-53-4, 
September 2005, Nottingham, UK. 

• Twyman RM (2004). Principles Of Proteomics (Advanced Text Series). Oxford, UK: BIOS Scientific Publishers. 
ISBN 1-85996-273-4. (covers almost all branches of proteomics) 

• Naven T, Westermeier R (2002). Proteomics in Practice: A Laboratory Manual of Proteome Analysis. Weinheim: 
Wiley-VCH. ISBN 3-527-30354-5. (focused on 2D-gels, good on detail) 

• Liebler DC (2002). Introduction to proteomics: tools for the new biology. Totowa, NJ: Humana Press. 
ISBN 0-89603-992-7. ISBN 0-585-41879-9 (electronic, on Netlibrary?), ISBN 0-89603-991-9 hbk 

• Wilkins MR, Williams KL, Appel RD, Hochstrasser DF (1997). Proteome Research: New Frontiers in 
Functional Genomics (Principles and Practice). Berlin: Springer. ISBN 3-540-62753-7. 

• Arora PS, Yamagiwa H, Srivastava A, Bolander ME, Sarkar G (2005). "Comparative evaluation of two 
two-dimensional gel electrophoresis image analysis software applications using synovial fluids from patients with 
joint disease" (http://www. asp?genre=article&doi=10.1007/s00776-004-0878-0). J
Orthop Sci 10 (2): 160-6. doi:10.1007/s00776-004-0878-0. PMID 15815863.

• Rediscovering Biology Online Textbook. Unit 2 Proteins and Proteomics. 1997-2006. 

• Weaver RF (2005). Molecular biology (3rd ed.). New York: McGraw-Hill. pp. 840-9. ISBN 0-07-284611-9.

• Reece J, Campbell N (2002). Biology (6th ed.). San Francisco: Benjamin Cummings. pp. 392-3.
ISBN 0-8053-6624-5. 

• Hye A, Lynham S, Tha